For this assignment we chose the Kaggle dataset Dog Breed Identification (https://www.kaggle.com/c/dog-breed-identification). The dataset contains pictures of dogs of different breeds, and in this image classification project we will try to predict the breed of the dog in each picture.
import pandas as pd
import numpy as np
import os
import cv2
import tensorflow as tf
from sklearn.model_selection import KFold, StratifiedKFold
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Input, Activation , AvgPool2D,MaxPool2D,Dropout , BatchNormalization
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import shutil
import random
from tensorflow import keras
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
from numpy import expand_dims
from matplotlib import pyplot
from os import listdir
from os.path import isfile, join
from PIL import Image
import csv
%matplotlib inline
train_set_images_size = {}
train_dir = r'C:\Users\shachar meretz\Downloads\dog-breed-identification\train\train'
train_set_size=len([name for name in os.listdir(train_dir) if os.path.isfile(os.path.join(train_dir, name))])
test_dir = r'C:\Users\shachar meretz\Downloads\dog-breed-identification\test\all_classes'
test_set_size=len([name for name in os.listdir(test_dir) if os.path.isfile(os.path.join(test_dir, name))])
print("Size Of Train Set : {}".format(train_set_size))
print("Size Of Test Set : {}".format(test_set_size))
# Get the dimensions of each image in the test set
test_set_images_size = {}
test_dir = r'C:\Users\shachar meretz\Downloads\dog-breed-identification\test\all_classes'
for img in os.listdir(test_dir):
    image_path = os.path.join(test_dir, img)
    # cv2.imread expects a read flag, not a color-conversion code;
    # read the image and convert BGR -> RGB explicitly
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    dim = '({},{},{})'.format(image.shape[0], image.shape[1], image.shape[2])
    test_set_images_size[dim] = test_set_images_size.get(dim, 0) + 1
# Get the dimensions of each image in the train set
train_set_images_size = {}
for img in os.listdir(train_dir):
    image_path = os.path.join(train_dir, img)
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    dim = '({},{},{})'.format(image.shape[0], image.shape[1], image.shape[2])
    train_set_images_size[dim] = train_set_images_size.get(dim, 0) + 1
Size Of Train Set : 10222 Size Of Test Set : 10357
# Image dimensions distribution - top 20
train_dims_df = pd.DataFrame(columns = ['dim' , 'values'])
test_dims_df = pd.DataFrame(columns = ['dim' , 'values'])
train_dims_df['dim'] = train_set_images_size.keys()
train_dims_df['values'] = train_set_images_size.values()
test_dims_df['dim'] = test_set_images_size.keys()
test_dims_df['values'] = test_set_images_size.values()
train_dims_df.sort_values('values' , inplace=True , ascending=False)
test_dims_df.sort_values('values' , inplace=True , ascending=False)
def Image_Dims_Distribution(df):
    fig, ax = plt.subplots(figsize=(16, 9))
    dims = df['dim'].head(20)
    values = df['values'].head(20)
    ax.barh(dims, values, color="grey")
    for s in ['top', 'bottom', 'left', 'right']:
        ax.spines[s].set_visible(False)
    ax.xaxis.set_ticks_position('none')
    ax.yaxis.set_ticks_position('none')
    ax.xaxis.set_tick_params(pad=5)
    ax.yaxis.set_tick_params(pad=10)
    ax.grid(True, color='grey', linestyle='-.', linewidth=0.5, alpha=0.2)
    ax.invert_yaxis()
    # annotate each bar with its count
    for i in ax.patches:
        plt.text(i.get_width() + 0.2, i.get_y() + 0.5,
                 str(round(i.get_width(), 2)),
                 fontsize=18, fontweight='bold', color='grey')
    plt.xlabel('Number Of Images', size=30)
    plt.ylabel('Dimensions', size=30)
    plt.title('Image Dimensions Distribution', size=40)
    plt.xticks(size=18, color="maroon")
    plt.yticks(size=18, color="maroon")
    plt.show()
Image_Dims_Distribution(train_dims_df)
Image_Dims_Distribution(test_dims_df)
# Plot a histogram of the number of images per label
def Image_Lables_Distribution(df):
    fig, ax = plt.subplots(figsize=(56, 100))
    lables = df['breed']
    values = df['number of images']
    ax.barh(lables, values, color="grey")
    for s in ['top', 'bottom', 'left', 'right']:
        ax.spines[s].set_visible(False)
    ax.xaxis.set_ticks_position('none')
    ax.yaxis.set_ticks_position('none')
    ax.xaxis.set_tick_params(pad=20)
    ax.yaxis.set_tick_params(pad=40)
    ax.grid(True, color='grey', linestyle='-.', linewidth=0.5, alpha=0.2)
    ax.invert_yaxis()
    # annotate each bar with its count
    for i in ax.patches:
        plt.text(i.get_width() + 0.2, i.get_y() + 0.5,
                 str(round(i.get_width(), 2)),
                 fontsize=40, fontweight='bold', color='black')
    plt.xlabel('Number Of Images', size=70)
    plt.ylabel('Classes', size=70)
    plt.title('Number Of Images For Classes', size=70)
    plt.xticks(size=40, color="maroon")
    plt.yticks(size=40, color="maroon")
    plt.show()
lables_df = pd.read_csv(r'C:\Users\shachar meretz\Downloads\dog-breed-identification\labels.csv' , engine="python")
images_by_lable = pd.DataFrame()
images_by_lable['number of images'] = lables_df.groupby('breed').size()
images_by_lable.sort_values('number of images' , inplace=True , ascending=False)
images_by_lable.reset_index(inplace=True)
print("number of classes in the data set: " + str(len(images_by_lable.index)))
Image_Lables_Distribution(images_by_lable)
number of classes in the data set: 120
VGGNet 19 : Accuracy: 83% Log Loss: 0.56
Inception V3: Accuracy: 87% Log Loss: 0.47
ResNet50: Accuracy: 90% Log Loss: 0.38
Xception: Accuracy: 89% Log Loss: 0.42
DenseNet: Accuracy: 91% Log Loss: 0.36
SENet: Accuracy: 89% Log Loss: 0.38
ResNext: Accuracy: 93% Log Loss: 0.22
InceptionV4: Accuracy: 94% Log Loss: 0.20
InceptionResnetV2: Accuracy: 95% Log Loss: 0.19
Ensembling InceptionResNetV2, InceptionV4 and ResNext: Accuracy: 96% Log Loss: 0.16
In addition, the suggested input size is in the range (299-400, 299-400).
Log-loss values obtained by other models (acquired through 5-fold CV; scores between folds may vary from below 0.17 to 0.26):
inception_4_300 - 0.228
inception_4_350 - 0.211
inception_4_400 - 0.204
inception_4_450 - 0.223
inceptionresnet_2_300 - 0.239
inceptionresnet_2_350 - 0.217
inceptionresnet_2_400 - 0.215
inceptionresnet_2_450 - 0.222
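For reference, the log loss reported above is the standard multiclass cross-entropy averaged over samples: the mean negative log-probability that the model assigned to the true class. A minimal sketch of the computation (the helper name and the example probabilities below are our own illustration, not from the competition):

```python
import numpy as np

# Hypothetical helper: mean negative log-probability of the true class.
def multiclass_log_loss(y_true, y_pred, eps=1e-15):
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(np.log(y_pred[np.arange(len(y_true)), y_true]))

# Two samples over three classes; the true classes are 0 and 1.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
loss = multiclass_log_loss(np.array([0, 1]), probs)
print(round(loss, 3))  # -> 0.29
```

A perfect classifier that puts probability 1 on the true class scores 0; spreading probability uniformly over 120 classes scores ln(120) ≈ 4.79, which is why the untrained models below start near that loss.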
train_dir = r'/content/drive/MyDrive/Data/train'
def Display_Images(data_set, data_set_lables, nrow=2, ncol=2, figsize=(20,20), preds=None):
    fig, ax = plt.subplots(nrows=nrow, ncols=ncol, figsize=figsize)
    fig.subplots_adjust(hspace=0.1, wspace=-0.7)
    for i in range(nrow*ncol):
        ax[i//ncol, i%ncol].imshow(data_set[i], cmap='binary')
        ax[i//ncol, i%ncol].set_xticks([])
        ax[i//ncol, i%ncol].set_yticks([])
        if preds is not None:
            ax[i//ncol, i%ncol].text(0.85, 0.1, str(preds[i]), transform=ax[i//ncol, i%ncol].transAxes,
                                     color='green' if data_set_lables[i]==preds[i] else 'red', weight='bold')
            ax[i//ncol, i%ncol].text(0.05, 0.1, str(data_set_lables[i]) + '\n', color='yellow',
                                     transform=ax[i//ncol, i%ncol].transAxes, weight='bold')
        else:
            ax[i//ncol, i%ncol].text(0.05, 0.1, str(data_set_lables[i]), color='black',
                                     transform=ax[i//ncol, i%ncol].transAxes, weight='bold', fontsize=18)
    plt.show()
img_data_array = []
image_names = []
# Sample 40 random training images, resize and normalize them
for i in range(40):
    file = random.choice(os.listdir(train_dir))
    image_names.append(file.split('.')[0])
    image_path = os.path.join(train_dir, file)
    image = load_img(image_path)
    image = image.resize((400, 400))
    image = np.array(image).astype('float32')
    image /= 255
    img_data_array.append(image)
lables_df = pd.read_csv(r'/content/drive/MyDrive/Data/labels.csv', engine="python")
lables_df.set_index('id', inplace=True)
data_set_lables = []
for name in image_names:
    data_set_lables.append(lables_df.loc[name, 'breed'])
Display_Images(img_data_array, data_set_lables, 10, 4, (30,30))
As we can see, many of the pictures are likely to be challenging for our model: pictures containing another animal (such as a sheep or a cat), pictures with two different dogs, pictures with people, and pictures with a lot of background noise (such as a supermarket or a nature scene).
# General function to display a grid of images
def plot_grid_of_images(images, rows, cols, wspace, hspace, lables=None):
    fig, ax = plt.subplots(rows, cols, figsize=(20,20))
    fig.subplots_adjust(hspace=hspace, wspace=wspace)
    for i in range(cols*rows):
        if cols > 1 and rows > 1:
            r = i // cols
            c = i % cols
            if i < len(images):
                image = load_img(images[i])
                image = image.resize((400, 400))
                ax[r, c].imshow(image)
                ax[r, c].set_xticks([])
                ax[r, c].set_yticks([])
                if lables is not None:
                    ax[r, c].text(0.05, 0.1, str(lables[i]), color='black',
                                  transform=ax[r, c].transAxes, weight='bold', fontsize=18)
        else:
            r = i
            if i < len(images):
                image = load_img(images[i])
                image = image.resize((400, 400))
                ax[r].imshow(image)
                ax[r].set_xticks([])
                ax[r].set_yticks([])
                if lables is not None:
                    # the axes array is one-dimensional in this branch
                    ax[r].text(0.05, 0.1, str(lables[i]), color='black',
                               transform=ax[r].transAxes, weight='bold', fontsize=18)
    plt.show()
mypath=r'C:\Users\shachar meretz\Downloads\dog-breed-identification\new'
onlyfiles = [os.path.join(mypath,f) for f in listdir(mypath) if isfile(join(mypath, f))]
plot_grid_of_images(onlyfiles , 2 ,4 , 0.2 , -0.5)
As we can see, some dog breeds look very similar to others; we assume that our model can easily misclassify them.
mypath=r'C:\Users\shachar meretz\Downloads\dog-breed-identification\missclasify'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
i=0
lables_df = pd.read_csv(r'C:\Users\shachar meretz\Downloads\dog-breed-identification\labelsMissClass.csv' , engine="python")
lables_df=lables_df[0:6]
lables = []
for file in onlyfiles:
    lbl = lables_df.loc[lables_df['id'] == file, 'breed'].iloc[0]
    lables.append(lbl)
onlyfiles=[os.path.join(mypath,f) for f in onlyfiles]
plot_grid_of_images(onlyfiles , 3 ,2 , 0.1 , 0.1 , lables)
mypath=r'C:\Users\shachar meretz\Downloads\dog-breed-identification\easy classify'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
lables_df = pd.read_csv(r'C:\Users\shachar meretz\Downloads\dog-breed-identification\labelsEasyClass.csv' , engine="python")
lables_df=lables_df[0:7]
lables = []
for file in onlyfiles:
    lbl = lables_df.loc[lables_df['id'] == file, 'breed'].iloc[0]
    lables.append(lbl)
onlyfiles=[os.path.join(mypath,f) for f in onlyfiles]
plot_grid_of_images(onlyfiles , 2 ,3 , 0.1 , -0.3 , lables)
img = load_img(r'C:\Users\ibitton\OneDrive - Intel Corporation\Desktop\Year 4\Deep Learning\Ass1\examples\a.jpeg')
pyplot.imshow(img)
pyplot.show()
data = img_to_array(img)
samples = expand_dims(data, 0)
datagen = ImageDataGenerator(rotation_range=100)
it = datagen.flow(samples, batch_size=1)
for i in range(9):
    pyplot.subplot(330 + 1 + i)
    batch = it.next()
    image = batch[0].astype('uint8')
    pyplot.imshow(image)
pyplot.show()
img = load_img(r'C:\Users\ibitton\OneDrive - Intel Corporation\Desktop\Year 4\Deep Learning\Ass1\examples\b.jpeg')
pyplot.imshow(img)
pyplot.show()
data = img_to_array(img)
samples = expand_dims(data, 0)
datagen = ImageDataGenerator(zoom_range=1)
it = datagen.flow(samples, batch_size=1)
for i in range(9):
    pyplot.subplot(330 + 1 + i)
    batch = it.next()
    image = batch[0].astype('uint8')
    pyplot.imshow(image)
pyplot.show()
img = load_img(r'C:\Users\ibitton\OneDrive - Intel Corporation\Desktop\Year 4\Deep Learning\Ass1\examples\d.jpeg')
pyplot.imshow(img)
pyplot.show()
data = img_to_array(img)
samples = expand_dims(data, 0)
datagen = ImageDataGenerator(height_shift_range=200)
it = datagen.flow(samples, batch_size=1)
for i in range(9):
    pyplot.subplot(330 + 1 + i)
    batch = it.next()
    image = batch[0].astype('uint8')
    pyplot.imshow(image)
pyplot.show()
We form a neural network and use 5-fold cross validation to measure model performance. The fit-model function below creates the 5-fold split and fits the model weights on the data. For each fold it trains for up to 30 epochs and uses callbacks for early stopping (stopping if validation accuracy has not improved for 10 epochs) and for saving the best weights. At the end, it displays the accuracy and loss of each fold.
breeds_df = pd.read_csv(r'/content/drive/MyDrive/Data/sample_submission.csv' , engine="python")
breed_names = breeds_df.columns[1:121]
def Model_Name(round):
    return "Model_{}.h5".format(round)

def fit_model_5Fold(num_of_model, input_size, augmentation=False):
    main_dir = r"/content/drive/MyDrive/Data/"
    save_dir = r"/content/drive/MyDrive/Data/Models/"
    lables_df = pd.read_csv("/content/drive/MyDrive/Data/labels.csv", engine="python")
    train_df = lables_df[['breed']].copy()  # copy to avoid SettingWithCopyWarning
    train_df['id'] = lables_df['id'] + '.jpg'
    Y = train_df[['breed']]
    kfold_model = KFold(n_splits=5)
    stratified_kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=4)
    if augmentation:
        datagen = ImageDataGenerator(
            rescale=1./255,
            rotation_range=40,
            width_shift_range=0.2,
            height_shift_range=0.2,
            shear_range=0.2,
            zoom_range=0.2,
            horizontal_flip=True,
            fill_mode='nearest'
        )
    else:
        datagen = ImageDataGenerator()
    round_kfold = 1
    VALIDATION_ACCURACY = []
    VALIDAITON_LOSS = []
    for train_index, val_index in kfold_model.split(np.zeros(len(Y)), Y):
        print("Start FOLD Number {}".format(round_kfold))
        training_data = train_df.iloc[train_index]
        validation_data = train_df.iloc[val_index]
        train_data_generator = datagen.flow_from_dataframe(training_data, directory=os.path.join(main_dir, 'train'),
                                                           x_col="id", y_col="breed",
                                                           target_size=input_size,
                                                           color_mode="rgb",
                                                           batch_size=32,
                                                           class_mode="categorical", shuffle=True)
        valid_data_generator = datagen.flow_from_dataframe(validation_data, directory=os.path.join(main_dir, 'train'),
                                                           x_col="id", y_col="breed",
                                                           target_size=input_size,
                                                           color_mode="rgb",
                                                           batch_size=32,
                                                           class_mode="categorical", shuffle=True)
        # save the best weights of each fold and stop early when
        # validation accuracy stops improving
        cp = tf.keras.callbacks.ModelCheckpoint(os.path.join(save_dir, Model_Name(round_kfold)),
                                                monitor='val_accuracy',
                                                verbose=1, save_best_only=True, mode='max')
        es = tf.keras.callbacks.EarlyStopping(patience=10, monitor='val_accuracy')
        callbacks_list = [cp, es]
        if num_of_model == 1:
            model = Get_first_model()
        elif num_of_model == 2:
            model = get_Second_Model()
        elif num_of_model == 3:
            model = get_third_model()
        else:
            model = get_model_for_augmentation()
        if round_kfold == 1:
            model.summary()
        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
        history = model.fit(train_data_generator, epochs=30, callbacks=callbacks_list,
                            validation_data=valid_data_generator)
        fig, ax = plt.subplots(1, 2, figsize=(12, 4))
        ax[0].plot(history.history['accuracy'], color='red')
        ax[0].plot(history.history['val_accuracy'], color='green')
        ax[0].set_title('Model Accuracy')
        ax[0].set_ylabel('Accuracy')
        ax[0].set_xlabel('Epoch')
        ax[0].legend(['Train', 'Validation'], loc='upper left')
        ax[1].plot(history.history['loss'], color='red')
        ax[1].plot(history.history['val_loss'], color='green')
        ax[1].set_title('Model Loss')
        ax[1].set_ylabel('Loss')
        ax[1].set_xlabel('Epoch')
        ax[1].legend(['Train', 'Validation'], loc='upper left')
        # save before show(), otherwise an empty figure is written to disk
        plt.savefig('fold{}.png'.format(round_kfold))
        plt.show()
        # evaluate with the best saved weights of this fold
        model.load_weights(save_dir + Model_Name(round_kfold))
        results = model.evaluate(valid_data_generator)
        results = dict(zip(model.metrics_names, results))
        VALIDATION_ACCURACY.append(results['accuracy'])
        VALIDAITON_LOSS.append(results['loss'])
        tf.keras.backend.clear_session()
        round_kfold += 1
    # reload the weights of the fold with the lowest validation loss
    index = int(np.argmin(VALIDAITON_LOSS))
    model.load_weights(save_dir + Model_Name(index + 1))
    return model
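Note that `fit_model_5Fold` collects the per-fold metrics but only uses the loss to pick which weights to reload. To report an overall cross-validation score, the usual practice is to summarize the per-fold results, for example (the numbers below are illustrative placeholders, not real results; real values come from `model.evaluate` on each fold):

```python
import numpy as np

# Placeholder per-fold results, in the same form the function collects them.
validation_accuracy = [0.026, 0.025, 0.024, 0.027, 0.023]
validation_loss = [31.32, 36.88, 30.10, 33.50, 29.64]

mean_acc = np.mean(validation_accuracy)
std_acc = np.std(validation_accuracy)
best_fold = int(np.argmin(validation_loss)) + 1  # folds are numbered from 1
print("CV accuracy: {:.3f} +/- {:.3f} (best fold by loss: {})".format(mean_acc, std_acc, best_fold))
```

Reporting mean ± standard deviation rather than a single fold's score makes the between-fold variability visible, which matters here given the 0.17-0.26 spread mentioned above.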
The basic model. The input shape of this model is a 375x375x3 picture. The model contains two blocks, each consisting of a convolution layer, a MaxPool layer and a Dropout layer. After these two blocks comes a Flatten layer. The output layer is a Dense layer with a softmax activation function and 120 units, representing the 120 different dog breeds.
def Get_first_model():
    first_model = Sequential()
    first_model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(375, 375, 3)))
    first_model.add(MaxPool2D(pool_size=(2, 2)))
    first_model.add(Dropout(0.2))
    first_model.add(Conv2D(64, (3, 3), activation='relu'))
    first_model.add(MaxPool2D(pool_size=(2, 2)))
    first_model.add(Dropout(0.2))
    first_model.add(Flatten())
    first_model.add(Dense(120, activation='softmax'))
    return first_model
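As a sanity check on the layer sizes, a 3x3 convolution with 'valid' padding and stride 1 shrinks each spatial dimension by 2, and a 2x2 max pool halves it (with floor division). A quick sketch of that arithmetic for the 375x375x3 input (the helper names are ours):

```python
def conv_out(n, k=3):
    # 'valid' padding, stride 1: output shrinks by k - 1
    return n - k + 1

def pool_out(n, p=2):
    # non-overlapping 2x2 max pooling halves the size (floor)
    return n // p

n = pool_out(conv_out(375))        # block 1: 375 -> 373 -> 186
n = pool_out(conv_out(n))          # block 2: 186 -> 184 -> 92
flat = n * n * 64                  # Flatten: 92 * 92 * 64 = 541696 values
dense_params = flat * 120 + 120    # softmax layer: weights + biases
print(flat, dense_params)          # -> 541696 65003640
```

This matches the Flatten and Dense sizes in the model summary printed when the model is first built, and shows where almost all of the ~65M parameters live: in the single Dense layer on top of a very large flattened feature map.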
first_model= fit_model_5Fold(1 , (375,375))
Start FOLD Number 1 Found 8177 validated image filenames belonging to 120 classes. Found 2045 validated image filenames belonging to 120 classes. Model: "sequential_3" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_6 (Conv2D) (None, 373, 373, 32) 896 _________________________________________________________________ max_pooling2d_6 (MaxPooling2 (None, 186, 186, 32) 0 _________________________________________________________________ dropout_6 (Dropout) (None, 186, 186, 32) 0 _________________________________________________________________ conv2d_7 (Conv2D) (None, 184, 184, 64) 18496 _________________________________________________________________ max_pooling2d_7 (MaxPooling2 (None, 92, 92, 64) 0 _________________________________________________________________ dropout_7 (Dropout) (None, 92, 92, 64) 0 _________________________________________________________________ flatten_3 (Flatten) (None, 541696) 0 _________________________________________________________________ dense_3 (Dense) (None, 120) 65003640 ================================================================= Total params: 65,023,032 Trainable params: 65,023,032 Non-trainable params: 0 _________________________________________________________________ Epoch 1/30 256/256 [==============================] - ETA: 0s - loss: 246.2587 - accuracy: 0.0103 Epoch 00001: val_accuracy improved from -inf to 0.01076, saving model to /content/drive/MyDrive/dog breed/Models/Model_1.h5 256/256 [==============================] - 70s 272ms/step - loss: 246.2587 - accuracy: 0.0103 - val_loss: 4.7855 - val_accuracy: 0.0108 Epoch 2/30 256/256 [==============================] - ETA: 0s - loss: 4.5423 - accuracy: 0.0775 Epoch 00002: val_accuracy improved from 0.01076 to 0.01222, saving model to /content/drive/MyDrive/dog breed/Models/Model_1.h5 256/256 [==============================] - 71s 277ms/step - 
loss: 4.5423 - accuracy: 0.0775 - val_loss: 4.8629 - val_accuracy: 0.0122 Epoch 3/30 256/256 [==============================] - ETA: 0s - loss: 3.3960 - accuracy: 0.2945 Epoch 00003: val_accuracy improved from 0.01222 to 0.01418, saving model to /content/drive/MyDrive/dog breed/Models/Model_1.h5 256/256 [==============================] - 71s 277ms/step - loss: 3.3960 - accuracy: 0.2945 - val_loss: 6.2823 - val_accuracy: 0.0142 Epoch 4/30 256/256 [==============================] - ETA: 0s - loss: 2.2022 - accuracy: 0.5398 Epoch 00004: val_accuracy improved from 0.01418 to 0.01516, saving model to /content/drive/MyDrive/dog breed/Models/Model_1.h5 256/256 [==============================] - 71s 278ms/step - loss: 2.2022 - accuracy: 0.5398 - val_loss: 9.1774 - val_accuracy: 0.0152 Epoch 5/30 256/256 [==============================] - ETA: 0s - loss: 1.3882 - accuracy: 0.7043 Epoch 00005: val_accuracy improved from 0.01516 to 0.01663, saving model to /content/drive/MyDrive/dog breed/Models/Model_1.h5 256/256 [==============================] - 73s 284ms/step - loss: 1.3882 - accuracy: 0.7043 - val_loss: 13.6056 - val_accuracy: 0.0166 Epoch 6/30 256/256 [==============================] - ETA: 0s - loss: 1.0138 - accuracy: 0.7928 Epoch 00006: val_accuracy improved from 0.01663 to 0.01760, saving model to /content/drive/MyDrive/dog breed/Models/Model_1.h5 256/256 [==============================] - 73s 284ms/step - loss: 1.0138 - accuracy: 0.7928 - val_loss: 16.7277 - val_accuracy: 0.0176 Epoch 7/30 256/256 [==============================] - ETA: 0s - loss: 0.6977 - accuracy: 0.8624 Epoch 00007: val_accuracy did not improve from 0.01760 256/256 [==============================] - 68s 267ms/step - loss: 0.6977 - accuracy: 0.8624 - val_loss: 17.9954 - val_accuracy: 0.0166 Epoch 8/30 256/256 [==============================] - ETA: 0s - loss: 0.5658 - accuracy: 0.8880 Epoch 00008: val_accuracy did not improve from 0.01760 256/256 [==============================] - 67s 260ms/step 
- loss: 0.5658 - accuracy: 0.8880 - val_loss: 19.9053 - val_accuracy: 0.0176 Epoch 9/30 256/256 [==============================] - ETA: 0s - loss: 0.3912 - accuracy: 0.9222 Epoch 00009: val_accuracy improved from 0.01760 to 0.01956, saving model to /content/drive/MyDrive/dog breed/Models/Model_1.h5 256/256 [==============================] - 70s 272ms/step - loss: 0.3912 - accuracy: 0.9222 - val_loss: 26.2300 - val_accuracy: 0.0196 Epoch 10/30 256/256 [==============================] - ETA: 0s - loss: 0.3670 - accuracy: 0.9329 Epoch 00010: val_accuracy did not improve from 0.01956 256/256 [==============================] - 68s 265ms/step - loss: 0.3670 - accuracy: 0.9329 - val_loss: 24.3384 - val_accuracy: 0.0191 Epoch 11/30 256/256 [==============================] - ETA: 0s - loss: 0.2809 - accuracy: 0.9496 Epoch 00011: val_accuracy improved from 0.01956 to 0.02152, saving model to /content/drive/MyDrive/dog breed/Models/Model_1.h5 256/256 [==============================] - 70s 272ms/step - loss: 0.2809 - accuracy: 0.9496 - val_loss: 23.7614 - val_accuracy: 0.0215 Epoch 12/30 256/256 [==============================] - ETA: 0s - loss: 0.2338 - accuracy: 0.9588 Epoch 00012: val_accuracy did not improve from 0.02152 256/256 [==============================] - 67s 262ms/step - loss: 0.2338 - accuracy: 0.9588 - val_loss: 29.6907 - val_accuracy: 0.0215 Epoch 13/30 256/256 [==============================] - ETA: 0s - loss: 0.2340 - accuracy: 0.9637 Epoch 00013: val_accuracy improved from 0.02152 to 0.02592, saving model to /content/drive/MyDrive/dog breed/Models/Model_1.h5 256/256 [==============================] - 70s 272ms/step - loss: 0.2340 - accuracy: 0.9637 - val_loss: 31.3156 - val_accuracy: 0.0259 Epoch 14/30 256/256 [==============================] - ETA: 0s - loss: 0.2135 - accuracy: 0.9667 Epoch 00014: val_accuracy did not improve from 0.02592 256/256 [==============================] - 67s 263ms/step - loss: 0.2135 - accuracy: 0.9667 - val_loss: 34.5494 - 
val_accuracy: 0.0196 Epoch 15/30 256/256 [==============================] - ETA: 0s - loss: 0.1802 - accuracy: 0.9715 Epoch 00015: val_accuracy did not improve from 0.02592 256/256 [==============================] - 67s 260ms/step - loss: 0.1802 - accuracy: 0.9715 - val_loss: 29.8963 - val_accuracy: 0.0230 Epoch 16/30 256/256 [==============================] - ETA: 0s - loss: 0.1761 - accuracy: 0.9740 Epoch 00016: val_accuracy did not improve from 0.02592 256/256 [==============================] - 66s 258ms/step - loss: 0.1761 - accuracy: 0.9740 - val_loss: 32.4627 - val_accuracy: 0.0230 Epoch 17/30 256/256 [==============================] - ETA: 0s - loss: 0.3330 - accuracy: 0.9633 Epoch 00017: val_accuracy did not improve from 0.02592 256/256 [==============================] - 67s 260ms/step - loss: 0.3330 - accuracy: 0.9633 - val_loss: 31.5631 - val_accuracy: 0.0200 Epoch 18/30 256/256 [==============================] - ETA: 0s - loss: 0.1925 - accuracy: 0.9749 Epoch 00018: val_accuracy did not improve from 0.02592 256/256 [==============================] - 67s 261ms/step - loss: 0.1925 - accuracy: 0.9749 - val_loss: 29.6409 - val_accuracy: 0.0196 Epoch 19/30 256/256 [==============================] - ETA: 0s - loss: 0.1686 - accuracy: 0.9787 Epoch 00019: val_accuracy did not improve from 0.02592 256/256 [==============================] - 67s 261ms/step - loss: 0.1686 - accuracy: 0.9787 - val_loss: 27.4613 - val_accuracy: 0.0215 Epoch 20/30 256/256 [==============================] - ETA: 0s - loss: 0.1327 - accuracy: 0.9806 Epoch 00020: val_accuracy did not improve from 0.02592 256/256 [==============================] - 67s 263ms/step - loss: 0.1327 - accuracy: 0.9806 - val_loss: 33.2246 - val_accuracy: 0.0196 Epoch 21/30 256/256 [==============================] - ETA: 0s - loss: 0.1230 - accuracy: 0.9851 Epoch 00021: val_accuracy did not improve from 0.02592 256/256 [==============================] - 66s 259ms/step - loss: 0.1230 - accuracy: 0.9851 - val_loss: 
33.5988 - val_accuracy: 0.0220 Epoch 22/30 256/256 [==============================] - ETA: 0s - loss: 0.1319 - accuracy: 0.9839 Epoch 00022: val_accuracy did not improve from 0.02592 256/256 [==============================] - 67s 260ms/step - loss: 0.1319 - accuracy: 0.9839 - val_loss: 33.5169 - val_accuracy: 0.0200 Epoch 23/30 256/256 [==============================] - ETA: 0s - loss: 0.0967 - accuracy: 0.9867 Epoch 00023: val_accuracy did not improve from 0.02592 256/256 [==============================] - 66s 259ms/step - loss: 0.0967 - accuracy: 0.9867 - val_loss: 37.5560 - val_accuracy: 0.0210
64/64 [==============================] - 12s 180ms/step - loss: 31.3156 - accuracy: 0.0259 Start FOLD Number 2 Found 8177 validated image filenames belonging to 120 classes. Found 2045 validated image filenames belonging to 120 classes. Epoch 1/30 256/256 [==============================] - ETA: 0s - loss: 288.7956 - accuracy: 0.0119 Epoch 00001: val_accuracy improved from -inf to 0.00831, saving model to /content/drive/MyDrive/dog breed/Models/Model_2.h5 256/256 [==============================] - 70s 271ms/step - loss: 288.7956 - accuracy: 0.0119 - val_loss: 4.7860 - val_accuracy: 0.0083 Epoch 2/30 256/256 [==============================] - ETA: 0s - loss: 4.7611 - accuracy: 0.0235 Epoch 00002: val_accuracy improved from 0.00831 to 0.01271, saving model to /content/drive/MyDrive/dog breed/Models/Model_2.h5 256/256 [==============================] - 70s 275ms/step - loss: 4.7611 - accuracy: 0.0235 - val_loss: 4.8509 - val_accuracy: 0.0127 Epoch 3/30 256/256 [==============================] - ETA: 0s - loss: 4.4919 - accuracy: 0.0934 Epoch 00003: val_accuracy did not improve from 0.01271 256/256 [==============================] - 67s 262ms/step - loss: 4.4919 - accuracy: 0.0934 - val_loss: 4.8835 - val_accuracy: 0.0103 Epoch 4/30 256/256 [==============================] - ETA: 0s - loss: 3.5000 - accuracy: 0.2700 Epoch 00004: val_accuracy improved from 0.01271 to 0.01418, saving model to /content/drive/MyDrive/dog breed/Models/Model_2.h5 256/256 [==============================] - 69s 270ms/step - loss: 3.5000 - accuracy: 0.2700 - val_loss: 5.9795 - val_accuracy: 0.0142 Epoch 5/30 256/256 [==============================] - ETA: 0s - loss: 2.5478 - accuracy: 0.4529 Epoch 00005: val_accuracy improved from 0.01418 to 0.01516, saving model to /content/drive/MyDrive/dog breed/Models/Model_2.h5 256/256 [==============================] - 71s 277ms/step - loss: 2.5478 - accuracy: 0.4529 - val_loss: 8.9770 - val_accuracy: 0.0152 Epoch 6/30 256/256 
[==============================] - ETA: 0s - loss: 1.9360 - accuracy: 0.5852 Epoch 00006: val_accuracy improved from 0.01516 to 0.01760, saving model to /content/drive/MyDrive/dog breed/Models/Model_2.h5 256/256 [==============================] - 71s 278ms/step - loss: 1.9360 - accuracy: 0.5852 - val_loss: 10.0181 - val_accuracy: 0.0176 Epoch 7/30 256/256 [==============================] - ETA: 0s - loss: 1.5259 - accuracy: 0.6765 Epoch 00007: val_accuracy improved from 0.01760 to 0.02103, saving model to /content/drive/MyDrive/dog breed/Models/Model_2.h5 256/256 [==============================] - 71s 277ms/step - loss: 1.5259 - accuracy: 0.6765 - val_loss: 12.0152 - val_accuracy: 0.0210 Epoch 8/30 256/256 [==============================] - ETA: 0s - loss: 1.2294 - accuracy: 0.7439 Epoch 00008: val_accuracy did not improve from 0.02103 256/256 [==============================] - 68s 267ms/step - loss: 1.2294 - accuracy: 0.7439 - val_loss: 16.9869 - val_accuracy: 0.0200 Epoch 9/30 256/256 [==============================] - ETA: 0s - loss: 0.9640 - accuracy: 0.7923 Epoch 00009: val_accuracy improved from 0.02103 to 0.02152, saving model to /content/drive/MyDrive/dog breed/Models/Model_2.h5 256/256 [==============================] - 70s 275ms/step - loss: 0.9640 - accuracy: 0.7923 - val_loss: 19.5069 - val_accuracy: 0.0215 Epoch 10/30 256/256 [==============================] - ETA: 0s - loss: 0.8050 - accuracy: 0.8279 Epoch 00010: val_accuracy improved from 0.02152 to 0.02249, saving model to /content/drive/MyDrive/dog breed/Models/Model_2.h5 256/256 [==============================] - 72s 283ms/step - loss: 0.8050 - accuracy: 0.8279 - val_loss: 21.8176 - val_accuracy: 0.0225 Epoch 11/30 256/256 [==============================] - ETA: 0s - loss: 0.7274 - accuracy: 0.8562 Epoch 00011: val_accuracy did not improve from 0.02249 256/256 [==============================] - 69s 269ms/step - loss: 0.7274 - accuracy: 0.8562 - val_loss: 21.7518 - val_accuracy: 0.0205 Epoch 12/30 
(fold 2, epochs 12-26, abridged) Training accuracy climbed from 0.8778 to 0.9719 while validation accuracy stayed near chance; the best val_accuracy was 0.0254 in epoch 16 (val_loss 36.8796), saved to /content/drive/MyDrive/dog breed/Models/Model_2.h5. Training stopped after epoch 26 with no further improvement.
64/64 [==============================] - 12s 180ms/step - loss: 36.8796 - accuracy: 0.0254
Start FOLD Number 3. Found 8178 validated image filenames belonging to 120 classes (training) and 2044 (validation).
(fold 3, epochs 1-30, abridged) Training accuracy climbed from 0.0115 to 0.9919 while validation loss grew from 4.79 to the mid-30s; the best val_accuracy was 0.0230 in epoch 25 (val_loss 36.0430), saved to /content/drive/MyDrive/dog breed/Models/Model_3.h5.
64/64 [==============================] - 12s 184ms/step - loss: 36.0430 - accuracy: 0.0230
Start FOLD Number 4. Found 8178 validated image filenames belonging to 120 classes (training) and 2044 (validation).
(fold 4, epochs 1-26, abridged) Training accuracy climbed from 0.0114 to 0.9914 while the best val_accuracy was 0.0225 in epoch 16 (val_loss 34.0040), saved to /content/drive/MyDrive/dog breed/Models/Model_4.h5. Training stopped after epoch 26 with no further improvement.
64/64 [==============================] - 12s 183ms/step - loss: 34.0040 - accuracy: 0.0225
Start FOLD Number 5. Found 8178 validated image filenames belonging to 120 classes (training) and 2044 (validation).
(fold 5, epochs 1-30, abridged) Training accuracy climbed from 0.0101 to 0.9874 while validation loss grew from 4.79 to around 40; the best val_accuracy was 0.0210 in epoch 27 (val_loss 41.5699), saved to /content/drive/MyDrive/dog breed/Models/Model_5.h5.
64/64 [==============================] - 12s 190ms/step - loss: 41.5699 - accuracy: 0.0210
The results of the first model show low accuracy and high loss, and clear overfitting.
As we learned in the lectures, there are several ways to improve a neural network:
we can use convolution layers, add batch-normalization layers to speed up learning and reach convergence faster, and use dropout and pooling layers, which have no learnable parameters and therefore add nothing to the model's parameter count.
We decided to improve the model by adding convolution layers whose filter counts increase from layer to layer; this adds depth to the learned representation, producing more features for the Flatten layer while reducing the spatial dimensions of the image.
We also added a batch-normalization layer at the end of the convolutional stack; this layer improves the learning speed of the model and makes training more stable.
To get better generalization, we increased the dropout rate so that more activations are randomly removed, and we reduced the input images to 300x300.
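The dimension reduction described above can be traced with simple arithmetic. This is a sketch (the helper `conv_pool_trace` is ours, not part of the notebook) assuming 'valid' padding for the 3x3 convolutions and non-overlapping 2x2 pooling, which are the Keras defaults used here:

```python
# Trace the spatial size through four Conv -> MaxPool blocks, starting at 300x300.
# Each 3x3 'valid' convolution removes 2 pixels per side pair; each 2x2 pool halves
# the size (rounding down).
def conv_pool_trace(size, n_blocks=4):
    sizes = [size]
    for _ in range(n_blocks):
        size = size - 2      # 3x3 convolution, 'valid' padding
        size = size // 2     # 2x2 max pooling, stride 2
        sizes.append(size)
    return sizes

print(conv_pool_trace(300))  # [300, 149, 73, 35, 16]
print(16 * 16 * 512)         # 131072 features reach the Flatten layer
```

The final 16x16x512 volume is what the Flatten layer unrolls into 131,072 features.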
def get_Second_Model():
second_model = Sequential()
second_model.add(Conv2D(32, (3, 3),activation='relu' , input_shape=(300, 300, 3)))
second_model.add(MaxPool2D(pool_size=(2, 2)))
second_model.add(Dropout(0.5))
second_model.add(Conv2D(64, (3, 3),activation='relu'))
second_model.add(MaxPool2D(pool_size=(2, 2)))
second_model.add(Dropout(0.5))
second_model.add(Conv2D(128, (3, 3),activation='relu' ))
second_model.add(MaxPool2D(pool_size=(2, 2)))
second_model.add(Dropout(0.5))
second_model.add(Conv2D(512, (3, 3),activation='relu' ))
second_model.add(MaxPool2D(pool_size=(2, 2)))
second_model.add(Dropout(0.5))
second_model.add(BatchNormalization())
second_model.add(Dropout(0.5))
second_model.add(Flatten())
second_model.add(Dense(120,activation='softmax'))
return second_model
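The parameter counts this architecture produces can be cross-checked by hand. A sketch (the helper `conv_params` is ours): a Conv2D layer holds kernel_h * kernel_w * in_channels * filters weights plus one bias per filter, and BatchNormalization holds 4 values per channel (gamma, beta, moving mean, moving variance), of which only gamma and beta are trainable.

```python
# Recompute the Param # column of the model summary for get_Second_Model().
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out  # weights + biases

convs = (conv_params(3, 3, 32) + conv_params(3, 32, 64)
         + conv_params(3, 64, 128) + conv_params(3, 128, 512))
bn = 4 * 512                    # 2048, of which 1024 are non-trainable
dense = 131072 * 120 + 120      # Flatten(16*16*512) -> Dense(120)
total = convs + bn + dense
print(convs, bn, dense, total)  # 683584 2048 15728760 16414392
```

The total, 16,414,392, matches the summary Keras prints below; note that almost 96% of the parameters sit in the single Dense layer after Flatten.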
second_model = fit_model_5Fold(2, (300, 300))
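fit_model_5Fold is defined earlier in the notebook; as a sketch of its bookkeeping (the helper `fold_sizes` is ours), splitting the 10,222 training images into 5 folds with scikit-learn's rule (the first n_samples % n_splits folds receive one extra sample) reproduces the "Found ... validated image filenames" counts in the logs:

```python
# Validation-fold sizes for an n_samples-image set split into n_splits folds,
# following sklearn KFold's allocation rule.
def fold_sizes(n_samples, n_splits):
    base, extra = divmod(n_samples, n_splits)
    return [base + 1 if i < extra else base for i in range(n_splits)]

print(fold_sizes(10222, 5))  # [2045, 2045, 2044, 2044, 2044]
```

This matches the logs: folds 1-2 validate on 2045 images (training on 8177), folds 3-5 on 2044 (training on 8178).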
Start FOLD Number 1. Found 8177 validated image filenames belonging to 120 classes (training) and 2045 (validation).
Model: "sequential"
Layer (type)                              Output Shape           Param #
conv2d (Conv2D)                           (None, 298, 298, 32)   896
max_pooling2d (MaxPooling2D)              (None, 149, 149, 32)   0
dropout (Dropout)                         (None, 149, 149, 32)   0
conv2d_1 (Conv2D)                         (None, 147, 147, 64)   18496
max_pooling2d_1 (MaxPooling2D)            (None, 73, 73, 64)     0
dropout_1 (Dropout)                       (None, 73, 73, 64)     0
conv2d_2 (Conv2D)                         (None, 71, 71, 128)    73856
max_pooling2d_2 (MaxPooling2D)            (None, 35, 35, 128)    0
dropout_2 (Dropout)                       (None, 35, 35, 128)    0
conv2d_3 (Conv2D)                         (None, 33, 33, 512)    590336
max_pooling2d_3 (MaxPooling2D)            (None, 16, 16, 512)    0
dropout_3 (Dropout)                       (None, 16, 16, 512)    0
batch_normalization (BatchNormalization)  (None, 16, 16, 512)    2048
dropout_4 (Dropout)                       (None, 16, 16, 512)    0
flatten (Flatten)                         (None, 131072)         0
dense (Dense)                             (None, 120)             15728760
Total params: 16,414,392 / Trainable params: 16,413,368 / Non-trainable params: 1,024
(fold 1, epochs 1-17, abridged) Training loss fell from 9.7437 to about 2.5 and training accuracy rose from 0.0148 to 0.3869; val_accuracy improved from 0.0088 to a best of 0.0416 in epoch 14 (val_loss 4.9190), saved to /content/drive/MyDrive/dog breed/Models/Model_1.h5. Epoch 18/30 256/256 [==============================] - ETA: 0s - loss: 2.0419 -
accuracy: 0.4763 Epoch 00018: val_accuracy did not improve from 0.04156 256/256 [==============================] - 59s 231ms/step - loss: 2.0419 - accuracy: 0.4763 - val_loss: 6.0241 - val_accuracy: 0.0191 Epoch 19/30 256/256 [==============================] - ETA: 0s - loss: 1.9591 - accuracy: 0.4953 Epoch 00019: val_accuracy did not improve from 0.04156 256/256 [==============================] - 58s 228ms/step - loss: 1.9591 - accuracy: 0.4953 - val_loss: 5.3273 - val_accuracy: 0.0391 Epoch 20/30 256/256 [==============================] - ETA: 0s - loss: 1.8184 - accuracy: 0.5218 Epoch 00020: val_accuracy did not improve from 0.04156 256/256 [==============================] - 59s 229ms/step - loss: 1.8184 - accuracy: 0.5218 - val_loss: 5.4157 - val_accuracy: 0.0333 Epoch 21/30 256/256 [==============================] - ETA: 0s - loss: 1.6227 - accuracy: 0.5627 Epoch 00021: val_accuracy did not improve from 0.04156 256/256 [==============================] - 58s 228ms/step - loss: 1.6227 - accuracy: 0.5627 - val_loss: 5.3981 - val_accuracy: 0.0357 Epoch 22/30 256/256 [==============================] - ETA: 0s - loss: 1.6693 - accuracy: 0.5660 Epoch 00022: val_accuracy did not improve from 0.04156 256/256 [==============================] - 58s 229ms/step - loss: 1.6693 - accuracy: 0.5660 - val_loss: 5.5069 - val_accuracy: 0.0313 Epoch 23/30 256/256 [==============================] - ETA: 0s - loss: 1.4166 - accuracy: 0.6194 Epoch 00023: val_accuracy did not improve from 0.04156 256/256 [==============================] - 59s 232ms/step - loss: 1.4166 - accuracy: 0.6194 - val_loss: 5.4532 - val_accuracy: 0.0372 Epoch 24/30 256/256 [==============================] - ETA: 0s - loss: 1.3467 - accuracy: 0.6269 Epoch 00024: val_accuracy did not improve from 0.04156 256/256 [==============================] - 58s 227ms/step - loss: 1.3467 - accuracy: 0.6269 - val_loss: 5.5367 - val_accuracy: 0.0293
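The layer shapes in the model summary above can be verified by hand: each 3×3 convolution with 'valid' padding shrinks the spatial side by 2, and each 2×2 max-pool floors the halved size. A minimal plain-Python sketch of that chain (sizes taken from the printed summary):

```python
def conv_out(size, kernel=3):
    # 'valid' padding: each 3x3 convolution shrinks the side by kernel - 1
    return size - (kernel - 1)

def pool_out(size, pool=2):
    # 2x2 max-pooling with stride 2 floors the halved size
    return size // pool

size = 300
for _ in range(4):                    # the four Conv2D -> MaxPool2D stages
    size = pool_out(conv_out(size))   # 149, 73, 35, 16

flat = size * size * 512              # Flatten after the 512-filter block
dense_params = flat * 120 + 120       # final Dense layer: weights + biases

print(size, flat, dense_params)       # -> 16 131072 15728760
```

This matches the 131,072 flattened features and the 15,728,760 parameters of the final Dense layer that Keras reports.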
64/64 [==============================] - 10s 159ms/step - loss: 4.9190 - accuracy: 0.0416
Start FOLD Number 2
Found 8177 validated image filenames belonging to 120 classes.
Found 2045 validated image filenames belonging to 120 classes.
Training log summary: 23 of 30 epochs run; training accuracy rose from 0.0149 to 0.5714 while validation accuracy peaked at 0.0416 (epoch 13, saved to Model_2.h5).
64/64 [==============================] - 10s 159ms/step - loss: 4.8123 - accuracy: 0.0416
Start FOLD Number 3
Found 8178 validated image filenames belonging to 120 classes.
Found 2044 validated image filenames belonging to 120 classes.
Training log summary: 20 of 30 epochs run; training accuracy rose from 0.0149 to 0.4918 while validation accuracy peaked at 0.0416 (epoch 10, saved to Model_3.h5).
64/64 [==============================] - 10s 162ms/step - loss: 4.7341 - accuracy: 0.0416
Start FOLD Number 4
Found 8178 validated image filenames belonging to 120 classes.
Found 2044 validated image filenames belonging to 120 classes.
Training log summary: 18 of 30 epochs run; training accuracy rose from 0.0139 to 0.4329 while validation accuracy peaked at 0.0411 (epoch 8, saved to Model_4.h5).
64/64 [==============================] - 10s 162ms/step - loss: 4.6756 - accuracy: 0.0411
Start FOLD Number 5
Found 8178 validated image filenames belonging to 120 classes.
Found 2044 validated image filenames belonging to 120 classes.
Training log summary: 27 of 30 epochs run; training accuracy rose from 0.0135 to 0.5373 while validation accuracy peaked at 0.0313 (epoch 17, saved to Model_5.h5).
64/64 [==============================] - 11s 173ms/step - loss: 5.3848 - accuracy: 0.0313
64/64 [==============================] - 11s 172ms/step
The results of the second model again show low accuracy and high loss, but this time the overfitting built up gradually instead of appearing all at once.
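One simple way to quantify how gradually a model overfits is to track the per-epoch gap between training and validation accuracy. This is a minimal sketch; `history` here is a toy stand-in for the dict that `model.fit(...).history` returns, with numbers chosen for illustration only:

```python
# Toy stand-in for the dict returned by tf.keras Model.fit(...).history.
history = {
    "accuracy":     [0.38, 0.41, 0.45, 0.50, 0.54],
    "val_accuracy": [0.023, 0.021, 0.022, 0.018, 0.020],
}

def overfit_gap(history):
    """Per-epoch gap between training and validation accuracy.

    A gap that widens steadily over epochs signals gradual overfitting;
    a sudden jump signals overfitting "all at once".
    """
    return [round(tr - va, 3)
            for tr, va in zip(history["accuracy"], history["val_accuracy"])]

print(overfit_gap(history))
```

With the toy numbers above the gap grows monotonically, which is the gradual pattern observed in the second model's logs.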
In the second improvement we added more convolution layers with larger filter counts, in order to gain depth and extract more features. We also inserted a Batch Normalization layer after every convolution layer and after every Dense layer, i.e. after each layer that performs a learned computation.
Beyond that, after the Flatten layer we stacked several Dense layers that gradually shrink the feature dimension, so that only the most significant features reach the final classification layer.
def get_third_model():
    model = Sequential()
    # Six convolution blocks with increasing filter counts (32 -> 2048);
    # each block is Conv -> BatchNorm -> MaxPool -> Dropout.
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(300, 300, 3)))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(128, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(1024, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(2048, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    # Classifier head: gradually shrink the feature dimension.
    model.add(Flatten())
    model.add(Dense(1024, activation='relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(512, activation='relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    # One output unit per dog breed.
    model.add(Dense(120, activation='softmax'))
    model.summary()
    return model
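The training logs that follow show the model being saved whenever `val_accuracy` improves, which is the behaviour of Keras's `ModelCheckpoint` callback with `save_best_only=True`. A minimal sketch of that wiring (the local file path and the optimizer in the commented usage are assumptions, not taken from the notebook):

```python
from tensorflow.keras.callbacks import ModelCheckpoint

# Save the model only when validation accuracy improves, matching the
# "val_accuracy improved ... saving model" messages in the logs below.
# 'Model_1.h5' is an assumed path for illustration.
checkpoint = ModelCheckpoint('Model_1.h5',
                             monitor='val_accuracy',
                             save_best_only=True,
                             verbose=1)

# Typical usage (sketch, assuming train/validation generators exist):
# model = get_third_model()
# model.compile(optimizer='adam',
#               loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_gen, validation_data=val_gen,
#           epochs=30, callbacks=[checkpoint])
```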
Model: "sequential_5"
_________________________________________________________________
Layer (type)                                 Output Shape         Param #
=================================================================
conv2d_30 (Conv2D)                           (None, 98, 98, 32)   896
batch_normalization_28 (BatchNormalization)  (None, 98, 98, 32)   128
max_pooling2d_26 (MaxPooling2D)              (None, 49, 49, 32)   0
dropout_28 (Dropout)                         (None, 49, 49, 32)   0
conv2d_31 (Conv2D)                           (None, 47, 47, 64)   18496
batch_normalization_29 (BatchNormalization)  (None, 47, 47, 64)   256
max_pooling2d_27 (MaxPooling2D)              (None, 23, 23, 64)   0
dropout_29 (Dropout)                         (None, 23, 23, 64)   0
conv2d_32 (Conv2D)                           (None, 21, 21, 128)  73856
batch_normalization_30 (BatchNormalization)  (None, 21, 21, 128)  512
max_pooling2d_28 (MaxPooling2D)              (None, 10, 10, 128)  0
dropout_30 (Dropout)                         (None, 10, 10, 128)  0
conv2d_33 (Conv2D)                           (None, 8, 8, 512)    590336
batch_normalization_31 (BatchNormalization)  (None, 8, 8, 512)    2048
max_pooling2d_29 (MaxPooling2D)              (None, 4, 4, 512)    0
dropout_31 (Dropout)                         (None, 4, 4, 512)    0
conv2d_34 (Conv2D)                           (None, 2, 2, 1024)   4719616
batch_normalization_32 (BatchNormalization)  (None, 2, 2, 1024)   4096
max_pooling2d_30 (MaxPooling2D)              (None, 1, 1, 1024)   0
dropout_32 (Dropout)                         (None, 1, 1, 1024)   0
flatten_1 (Flatten)                          (None, 1024)         0
dense_3 (Dense)                              (None, 1024)         1049600
batch_normalization_33 (BatchNormalization)  (None, 1024)         4096
dropout_33 (Dropout)                         (None, 1024)         0
dense_4 (Dense)                              (None, 512)          524800
batch_normalization_34 (BatchNormalization)  (None, 512)          2048
dropout_34 (Dropout)                         (None, 512)          0
dense_5 (Dense)                              (None, 120)          61560
=================================================================
Total params: 7,052,344
Trainable params: 7,045,752
Non-trainable params: 6,592
_________________________________________________________________
third_model = fit_model_5Fold(3,(300,300),False)
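`fit_model_5Fold` is defined earlier in the notebook; its "Start FOLD Number k / Found N validated image filenames" output is consistent with a `StratifiedKFold` loop that feeds each split into image generators. A minimal sketch of just the splitting logic on a toy dataframe (the `id`/`breed` column names and the commented generator call are assumptions for illustration):

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Toy stand-in for the labels dataframe (10 images, 2 breeds).
labels = pd.DataFrame({
    "id":    [f"img_{i}" for i in range(10)],
    "breed": ["pug", "beagle"] * 5,
})

# Stratified splits keep the breed proportions in every fold.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(
        skf.split(labels["id"], labels["breed"]), start=1):
    train_df, val_df = labels.iloc[train_idx], labels.iloc[val_idx]
    print(f"Start FOLD Number {fold}: "
          f"{len(train_df)} train / {len(val_df)} validation")
    # In the notebook each fold would then build generators, e.g.:
    # train_gen = datagen.flow_from_dataframe(train_df, directory=train_dir,
    #                                         x_col="id", y_col="breed",
    #                                         target_size=(300, 300))
```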
Start FOLD Number 1
Found 8177 validated image filenames belonging to 120 classes.
Found 2045 validated image filenames belonging to 120 classes.
Model: "sequential"
_________________________________________________________________
Layer (type)                                Output Shape          Param #
=================================================================
conv2d (Conv2D)                             (None, 298, 298, 32)  896
batch_normalization (BatchNormalization)    (None, 298, 298, 32)  128
max_pooling2d (MaxPooling2D)                (None, 149, 149, 32)  0
dropout (Dropout)                           (None, 149, 149, 32)  0
conv2d_1 (Conv2D)                           (None, 147, 147, 64)  18496
batch_normalization_1 (BatchNormalization)  (None, 147, 147, 64)  256
max_pooling2d_1 (MaxPooling2D)              (None, 73, 73, 64)    0
dropout_1 (Dropout)                         (None, 73, 73, 64)    0
conv2d_2 (Conv2D)                           (None, 71, 71, 128)   73856
batch_normalization_2 (BatchNormalization)  (None, 71, 71, 128)   512
max_pooling2d_2 (MaxPooling2D)              (None, 35, 35, 128)   0
dropout_2 (Dropout)                         (None, 35, 35, 128)   0
conv2d_3 (Conv2D)                           (None, 33, 33, 512)   590336
batch_normalization_3 (BatchNormalization)  (None, 33, 33, 512)   2048
max_pooling2d_3 (MaxPooling2D)              (None, 16, 16, 512)   0
dropout_3 (Dropout)                         (None, 16, 16, 512)   0
conv2d_4 (Conv2D)                           (None, 14, 14, 1024)  4719616
batch_normalization_4 (BatchNormalization)  (None, 14, 14, 1024)  4096
max_pooling2d_4 (MaxPooling2D)              (None, 7, 7, 1024)    0
dropout_4 (Dropout)                         (None, 7, 7, 1024)    0
conv2d_5 (Conv2D)                           (None, 5, 5, 2048)    18876416
batch_normalization_5 (BatchNormalization)  (None, 5, 5, 2048)    8192
max_pooling2d_5 (MaxPooling2D)              (None, 2, 2, 2048)    0
dropout_5 (Dropout)                         (None, 2, 2, 2048)    0
flatten (Flatten)                           (None, 8192)          0
dense (Dense)                               (None, 1024)          8389632
batch_normalization_6 (BatchNormalization)  (None, 1024)          4096
dropout_6 (Dropout)                         (None, 1024)          0
dense_1 (Dense)                             (None, 512)           524800
batch_normalization_7 (BatchNormalization)  (None, 512)           2048
dropout_7 (Dropout)                         (None, 512)           0
dense_2 (Dense)                             (None, 120)           61560
=================================================================
Total params: 33,276,984
Trainable params: 33,266,296
Non-trainable params: 10,688
_________________________________________________________________
Epoch 1/30 - 1982s 8s/step - loss: 5.6371 - accuracy: 0.0142 - val_loss: 5.0860 - val_accuracy: 0.0176 (val_accuracy improved from -inf to 0.01760; saved to /content/drive/MyDrive/Data/Models/Model_1.h5)
Epoch 2/30 - 65s 254ms/step - loss: 5.2457 - accuracy: 0.0191 - val_loss: 5.0035 - val_accuracy: 0.0166 (no improvement from 0.01760)
Epoch 3/30 - 66s 259ms/step - loss: 4.9778 - accuracy: 0.0296 - val_loss: 4.7916 - val_accuracy: 0.0210 (improved to 0.02103; saved)
Epoch 4/30 - 67s 262ms/step - loss: 4.7862 - accuracy: 0.0372 - val_loss: 4.8218 - val_accuracy: 0.0254 (improved to 0.02543; saved)
Epoch 5/30 - 67s 261ms/step - loss: 4.6103 - accuracy: 0.0478 - val_loss: 4.8758 - val_accuracy: 0.0269 (improved to 0.02689; saved)
Epoch 6/30 - 67s 262ms/step - loss: 4.4798 - accuracy: 0.0528 - val_loss: 4.7582 - val_accuracy: 0.0303 (improved to 0.03032; saved)
Epoch 7/30 - 66s 256ms/step - loss: 4.3330 - accuracy: 0.0629 - val_loss: 4.9956 - val_accuracy: 0.0230 (no improvement from 0.03032)
Epoch 8/30 - 66s 260ms/step - loss: 4.2180 - accuracy: 0.0728 - val_loss: 4.6084 - val_accuracy: 0.0494 (improved to 0.04939; saved)
Epoch 9/30 - 65s 255ms/step - loss: 4.1189 - accuracy: 0.0800 - val_loss: 5.3320 - val_accuracy: 0.0166 (no improvement from 0.04939)
Epoch 10/30 - 65s 254ms/step - loss: 4.0237 - accuracy: 0.0885 - val_loss: 4.7347 - val_accuracy: 0.0489 (no improvement)
Epoch 11/30 - 66s 258ms/step - loss: 3.9303 - accuracy: 0.1017 - val_loss: 4.2829 - val_accuracy: 0.0699 (improved to 0.06993; saved)
Epoch 12/30 - 67s 260ms/step - loss: 3.8480 - accuracy: 0.1106 - val_loss: 4.1565 - val_accuracy: 0.0724 (improved to 0.07237; saved)
Epoch 13/30 - 67s 261ms/step - loss: 3.7372 - accuracy: 0.1274 - val_loss: 4.2871 - val_accuracy: 0.0743 (improved to 0.07433; saved)
Epoch 14/30 - 67s 260ms/step - loss: 3.6541 - accuracy: 0.1339 - val_loss: 4.1758 - val_accuracy: 0.0817 (improved to 0.08166; saved)
Epoch 15/30 - 65s 254ms/step - loss: 3.5318 - accuracy: 0.1567 - val_loss: 4.2083 - val_accuracy: 0.0778 (no improvement from 0.08166)
Epoch 16/30 - 64s 250ms/step - loss: 3.4561 - accuracy: 0.1662 - val_loss: 4.2915 - val_accuracy: 0.0807 (no improvement)
Epoch 17/30 - 66s 258ms/step - loss: 3.3744 - accuracy: 0.1705 - val_loss: 4.1486 - val_accuracy: 0.0905 (improved to 0.09046; saved)
Epoch 18/30 - 66s 259ms/step - loss: 3.3093 - accuracy: 0.1949 - val_loss: 4.2012 - val_accuracy: 0.0914 (improved to 0.09144; saved)
Epoch 19/30 - 66s 260ms/step - loss: 3.2060 - accuracy: 0.2154 - val_loss: 4.2770 - val_accuracy: 0.0978 (improved to 0.09780; saved)
Epoch 20/30 - 65s 254ms/step - loss: 3.1276 - accuracy: 0.2194 - val_loss: 4.4652 - val_accuracy: 0.0724 (no improvement from 0.09780)
Epoch 21/30 - 67s 260ms/step - loss: 3.0782 - accuracy: 0.2401 - val_loss: 4.0037 - val_accuracy: 0.0998 (improved to 0.09976; saved)
Epoch 22/30 - 67s 261ms/step - loss: 3.0049 - accuracy: 0.2456 - val_loss: 3.8755 - val_accuracy: 0.1335 (improved to 0.13350; saved)
Epoch 23/30 - 65s 254ms/step - loss: 2.8774 - accuracy: 0.2670 - val_loss: 4.0108 - val_accuracy: 0.1286 (no improvement from 0.13350)
Epoch 24/30 - 65s 253ms/step - loss: 2.7380 - accuracy: 0.3029 - val_loss: 4.7700 - val_accuracy: 0.0601 (no improvement)
Epoch 25/30 - 64s 252ms/step - loss: 2.7583 - accuracy: 0.2975 - val_loss: 3.9580 - val_accuracy: 0.1227 (no improvement)
Epoch 26/30 - 65s 253ms/step - loss: 2.6182 - accuracy: 0.3265 - val_loss: 4.3008 - val_accuracy: 0.1076 (no improvement)
Epoch 27/30 - 66s 259ms/step - loss: 2.5287 - accuracy: 0.3399 - val_loss: 3.9413 - val_accuracy: 0.1384 (improved to 0.13839; saved)
Epoch 28/30 - 65s 255ms/step - loss: 2.4363 - accuracy: 0.3593 - val_loss: 4.2912 - val_accuracy: 0.1149 (no improvement from 0.13839)
Epoch 29/30 - 65s 253ms/step - loss: 2.3003 - accuracy: 0.3885 - val_loss: 4.0731 - val_accuracy: 0.1335 (no improvement)
Epoch 30/30 - 65s 253ms/step - loss: 2.2072 - accuracy: 0.4061 - val_loss: 4.2167 - val_accuracy: 0.1330 (no improvement)
64/64 [==============================] - 10s 162ms/step - loss: 3.9413 - accuracy: 0.1384
Start FOLD Number 2
Found 8177 validated image filenames belonging to 120 classes.
Found 2045 validated image filenames belonging to 120 classes.
Epoch 1/30 - 67s 261ms/step - loss: 5.6621 - accuracy: 0.0108 - val_loss: 5.3949 - val_accuracy: 0.0132 (val_accuracy improved from -inf to 0.01320; saved to /content/drive/MyDrive/Data/Models/Model_2.h5)
Epoch 2/30 - 68s 264ms/step - loss: 5.2262 - accuracy: 0.0204 - val_loss: 4.7207 - val_accuracy: 0.0254 (improved to 0.02543; saved)
Epoch 3/30 - 67s 261ms/step - loss: 5.0018 - accuracy: 0.0275 - val_loss: 4.6571 - val_accuracy: 0.0333 (improved to 0.03325; saved)
Epoch 4/30 - 65s 255ms/step - loss: 4.8237 - accuracy: 0.0319 - val_loss: 4.7683 - val_accuracy: 0.0215 (no improvement from 0.03325)
Epoch 5/30 - 65s 253ms/step - loss: 4.6974 - accuracy: 0.0396 - val_loss: 4.6990 - val_accuracy: 0.0249 (no improvement)
Epoch 6/30 - 65s 252ms/step - loss: 4.5141 - accuracy: 0.0487 - val_loss: 4.6872 - val_accuracy: 0.0308 (no improvement)
Epoch 7/30 - 66s 257ms/step - loss: 4.3997 - accuracy: 0.0580 - val_loss: 4.4236 - val_accuracy: 0.0489 (improved to 0.04890; saved)
Epoch 8/30 - 65s 253ms/step - loss: 4.2430 - accuracy: 0.0681 - val_loss: 4.5699 - val_accuracy: 0.0352 (no improvement from 0.04890)
Epoch 9/30 - 64s 251ms/step - loss: 4.1724 - accuracy: 0.0741 - val_loss: 4.3565 - val_accuracy: 0.0474 (no improvement)
Epoch 10/30 - 64s 252ms/step - loss: 4.0464 - accuracy: 0.0903 - val_loss: 4.7021 - val_accuracy: 0.0396 (no improvement)
Epoch 11/30 - 66s 258ms/step - loss: 3.9418 - accuracy: 0.0982 - val_loss: 4.4973 - val_accuracy: 0.0518 (improved to 0.05183; saved)
Epoch 12/30 - 65s 253ms/step - loss: 3.8589 - accuracy: 0.1124 - val_loss: 4.5240 - val_accuracy: 0.0499 (no improvement from 0.05183)
Epoch 13/30 - 66s 257ms/step - loss: 3.7568 - accuracy: 0.1197 - val_loss: 4.1310 - val_accuracy: 0.0826 (improved to 0.08264; saved)
Epoch 14/30 - 65s 254ms/step - loss: 3.6768 - accuracy: 0.1322 - val_loss: 4.2433 - val_accuracy: 0.0714 (no improvement from 0.08264)
Epoch 15/30 - 66s 259ms/step - loss: 3.5726 - accuracy: 0.1458 - val_loss: 3.9074 - val_accuracy: 0.1002 (improved to 0.10024; saved)
Epoch 16/30 - 65s 253ms/step - loss: 3.4855 - accuracy: 0.1611 - val_loss: 4.1021 - val_accuracy: 0.0983 (no improvement from 0.10024)
Epoch 17/30 - 66s 258ms/step - loss: 3.4122 - accuracy: 0.1749 - val_loss: 3.8531 - val_accuracy: 0.1242 (improved to 0.12421; saved)
Epoch 18/30 - 65s 255ms/step - loss: 3.3148 - accuracy: 0.1979 - val_loss: 3.9322 - val_accuracy: 0.1061 (no improvement from 0.12421)
Epoch 19/30 - 65s 253ms/step - loss: 3.2545 - accuracy: 0.2013 - val_loss: 4.3937 - val_accuracy: 0.0792 (no improvement)
Epoch 20/30 - 64s 252ms/step - loss: 3.1456 - accuracy: 0.2244 - val_loss: 4.4818 - val_accuracy: 0.0954 (no improvement)
Epoch 21/30 - 64s 251ms/step - loss: 3.0789 - accuracy: 0.2326 - val_loss: 4.0430 - val_accuracy: 0.1105 (no improvement)
Epoch 22/30 - 64s 251ms/step - loss: 2.9858 - accuracy: 0.2524 - val_loss: 4.2229 - val_accuracy: 0.0890 (no improvement)
Epoch 23/30 - 66s 257ms/step - loss: 2.9307 - accuracy: 0.2560 - val_loss: 3.8741 - val_accuracy: 0.1359 (improved to 0.13594; saved)
Epoch 24/30 - 67s 260ms/step - loss: 2.8097 - accuracy: 0.2803 - val_loss: 3.8309 - val_accuracy: 0.1418 (improved to 0.14181; saved)
Epoch 25/30 - 65s 255ms/step - loss: 2.7671 - accuracy: 0.2967 - val_loss: 5.3094 - val_accuracy: 0.0249 (no improvement from 0.14181)
Epoch 26/30 - 66s 258ms/step - loss: 2.8782 - accuracy: 0.2731 - val_loss: 3.7728 - val_accuracy: 0.1535 (improved to 0.15355; saved)
Epoch 27/30 - 65s 254ms/step - loss: 2.6120 - accuracy: 0.3277 - val_loss: 4.0624 - val_accuracy: 0.1355 (no improvement from 0.15355)
Epoch 28/30 - 66s 258ms/step - loss: 2.4341 - accuracy: 0.3614 - val_loss: 3.7242 - val_accuracy: 0.1731 (improved to 0.17311; saved)
Epoch 29/30 - 65s 254ms/step - loss: 2.3493 - accuracy: 0.3762 - val_loss: 4.0242 - val_accuracy: 0.1389 (no improvement from 0.17311)
Epoch 30/30 - 65s 252ms/step - loss: 2.2555 - accuracy: 0.3942 - val_loss: 3.7823 - val_accuracy: 0.1677 (no improvement)
64/64 [==============================] - 10s 160ms/step - loss: 3.7242 - accuracy: 0.1731
Start FOLD Number 3
Found 8178 validated image filenames belonging to 120 classes.
Found 2044 validated image filenames belonging to 120 classes.
WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0828s vs `on_train_batch_end` time: 0.1257s). Check your callbacks.
Epoch 1/30 - 69s 268ms/step - loss: 5.6988 - accuracy: 0.0131 - val_loss: 5.5091 - val_accuracy: 0.0157 (val_accuracy improved from -inf to 0.01566; saved to /content/drive/MyDrive/Data/Models/Model_3.h5)
Epoch 2/30 - 67s 262ms/step - loss: 5.2285 - accuracy: 0.0224 - val_loss: 5.0477 - val_accuracy: 0.0220 (improved to 0.02202; saved)
Epoch 3/30 - 66s 258ms/step - loss: 4.9738 - accuracy: 0.0317 - val_loss: 4.9796 - val_accuracy: 0.0191 (no improvement from 0.02202)
Epoch 4/30 - 67s 260ms/step - loss: 4.8319 - accuracy: 0.0331 - val_loss: 4.7194 - val_accuracy: 0.0298 (improved to 0.02984; saved)
Epoch 5/30 - 67s 264ms/step - loss: 4.6584 - accuracy: 0.0426 - val_loss: 4.8289 - val_accuracy: 0.0323 (improved to 0.03229; saved)
Epoch 6/30 - 67s 261ms/step - loss: 4.5124 - accuracy: 0.0471 - val_loss: 4.7964 - val_accuracy: 0.0347 (improved to 0.03474; saved)
Epoch 7/30 - 67s 261ms/step - loss: 4.3725 - accuracy: 0.0550 - val_loss: 4.7738 - val_accuracy: 0.0357 (improved to 0.03571; saved)
Epoch 8/30 - 66s 256ms/step - loss: 4.2485 - accuracy: 0.0657 - val_loss: 5.0210 - val_accuracy: 0.0328 (no improvement from 0.03571)
Epoch 9/30 - 66s 260ms/step - loss: 4.1124 - accuracy: 0.0794 - val_loss: 4.7929 - val_accuracy: 0.0470 (improved to 0.04697; saved)
Epoch 10/30 - 67s 262ms/step - loss: 4.0066 - accuracy: 0.0879 - val_loss: 4.6756 - val_accuracy: 0.0631 (improved to 0.06311; saved)
Epoch 11/30 - 66s 257ms/step - loss: 3.9130 - accuracy: 0.1036 - val_loss: 4.4449 - val_accuracy: 0.0519 (no improvement from 0.06311)
Epoch 12/30 - 67s 260ms/step - loss: 3.7675 - accuracy: 0.1215 - val_loss: 4.1942 - val_accuracy: 0.0793 (improved to 0.07926; saved)
Epoch 13/30 - 66s 258ms/step - loss: 3.6985 - accuracy: 0.1311 - val_loss: 4.4041 - val_accuracy: 0.0631 (no improvement from 0.07926)
Epoch 14/30 - 67s 262ms/step - loss: 3.5819 - accuracy: 0.1541 - val_loss: 4.2872 - val_accuracy: 0.0856 (improved to 0.08562; saved)
Epoch 15/30 - 67s 263ms/step - loss: 3.5042 - accuracy: 0.1579 - val_loss: 3.9390 - val_accuracy: 0.1125 (improved to 0.11252; saved)
Epoch 16/30 - 66s 257ms/step - loss: 3.4212 - accuracy: 0.1742 - val_loss: 4.4652 - val_accuracy: 0.0881 (no improvement from 0.11252)
Epoch 17/30 - loss: 3.3094 - accuracy: 0.1868 (val_accuracy did not improve from 0.11252)
- 65s 255ms/step - loss: 3.3094 - accuracy: 0.1868 - val_loss: 4.0189 - val_accuracy: 0.1052 Epoch 18/30 256/256 [==============================] - ETA: 0s - loss: 3.1978 - accuracy: 0.2150 Epoch 00018: val_accuracy did not improve from 0.11252 256/256 [==============================] - 65s 254ms/step - loss: 3.1978 - accuracy: 0.2150 - val_loss: 4.3475 - val_accuracy: 0.0866 Epoch 19/30 256/256 [==============================] - ETA: 0s - loss: 3.0974 - accuracy: 0.2268 Epoch 00019: val_accuracy improved from 0.11252 to 0.12329, saving model to /content/drive/MyDrive/Data/Models/Model_3.h5 256/256 [==============================] - 67s 261ms/step - loss: 3.0974 - accuracy: 0.2268 - val_loss: 3.8459 - val_accuracy: 0.1233 Epoch 20/30 256/256 [==============================] - ETA: 0s - loss: 3.0317 - accuracy: 0.2416 Epoch 00020: val_accuracy improved from 0.12329 to 0.14139, saving model to /content/drive/MyDrive/Data/Models/Model_3.h5 256/256 [==============================] - 67s 263ms/step - loss: 3.0317 - accuracy: 0.2416 - val_loss: 3.7959 - val_accuracy: 0.1414 Epoch 21/30 256/256 [==============================] - ETA: 0s - loss: 2.9381 - accuracy: 0.2570 Epoch 00021: val_accuracy improved from 0.14139 to 0.14432, saving model to /content/drive/MyDrive/Data/Models/Model_3.h5 256/256 [==============================] - 67s 263ms/step - loss: 2.9381 - accuracy: 0.2570 - val_loss: 3.8784 - val_accuracy: 0.1443 Epoch 22/30 256/256 [==============================] - ETA: 0s - loss: 2.9232 - accuracy: 0.2607 Epoch 00022: val_accuracy did not improve from 0.14432 256/256 [==============================] - 66s 259ms/step - loss: 2.9232 - accuracy: 0.2607 - val_loss: 3.7883 - val_accuracy: 0.1404 Epoch 23/30 256/256 [==============================] - ETA: 0s - loss: 2.8034 - accuracy: 0.2808 Epoch 00023: val_accuracy did not improve from 0.14432 256/256 [==============================] - 65s 256ms/step - loss: 2.8034 - accuracy: 0.2808 - val_loss: 3.8654 - 
val_accuracy: 0.1350 Epoch 24/30 256/256 [==============================] - ETA: 0s - loss: 2.6603 - accuracy: 0.3079 Epoch 00024: val_accuracy did not improve from 0.14432 256/256 [==============================] - 65s 254ms/step - loss: 2.6603 - accuracy: 0.3079 - val_loss: 4.1672 - val_accuracy: 0.1267 Epoch 25/30 256/256 [==============================] - ETA: 0s - loss: 2.5718 - accuracy: 0.3286 Epoch 00025: val_accuracy improved from 0.14432 to 0.15900, saving model to /content/drive/MyDrive/Data/Models/Model_3.h5 256/256 [==============================] - 67s 260ms/step - loss: 2.5718 - accuracy: 0.3286 - val_loss: 3.6880 - val_accuracy: 0.1590 Epoch 26/30 256/256 [==============================] - ETA: 0s - loss: 2.4379 - accuracy: 0.3540 Epoch 00026: val_accuracy did not improve from 0.15900 256/256 [==============================] - 66s 256ms/step - loss: 2.4379 - accuracy: 0.3540 - val_loss: 3.8593 - val_accuracy: 0.1370 Epoch 27/30 256/256 [==============================] - ETA: 0s - loss: 2.3205 - accuracy: 0.3753 Epoch 00027: val_accuracy did not improve from 0.15900 256/256 [==============================] - 65s 255ms/step - loss: 2.3205 - accuracy: 0.3753 - val_loss: 3.7807 - val_accuracy: 0.1556 Epoch 28/30 256/256 [==============================] - ETA: 0s - loss: 2.1913 - accuracy: 0.4098 Epoch 00028: val_accuracy did not improve from 0.15900 256/256 [==============================] - 65s 254ms/step - loss: 2.1913 - accuracy: 0.4098 - val_loss: 3.8448 - val_accuracy: 0.1522 Epoch 29/30 256/256 [==============================] - ETA: 0s - loss: 2.1289 - accuracy: 0.4230 Epoch 00029: val_accuracy did not improve from 0.15900 256/256 [==============================] - 65s 254ms/step - loss: 2.1289 - accuracy: 0.4230 - val_loss: 4.0227 - val_accuracy: 0.1575 Epoch 30/30 256/256 [==============================] - ETA: 0s - loss: 2.0793 - accuracy: 0.4265 Epoch 00030: val_accuracy improved from 0.15900 to 0.16879, saving model to 
/content/drive/MyDrive/Data/Models/Model_3.h5 256/256 [==============================] - 67s 261ms/step - loss: 2.0793 - accuracy: 0.4265 - val_loss: 3.9304 - val_accuracy: 0.1688
64/64 [==============================] - 11s 177ms/step - loss: 3.9304 - accuracy: 0.1688

Start FOLD Number 4
Found 8178 validated image filenames belonging to 120 classes.
Found 2044 validated image filenames belonging to 120 classes.
WARNING:tensorflow: Callbacks method `on_train_batch_end` is slow compared to the batch time; check your callbacks.

(~64-69 s/epoch; checkpoint /content/drive/MyDrive/Data/Models/Model_4.h5 saved whenever val_accuracy improved)
Epoch   loss    accuracy  val_loss  val_accuracy  checkpoint
 1/30   5.6829  0.0133    5.2944    0.0127        saved
 2/30   5.2283  0.0193    4.8328    0.0181        saved
 3/30   4.9902  0.0262    4.7064    0.0220        saved
 4/30   4.8212  0.0353    4.6990    0.0289        saved
 5/30   4.6954  0.0402    4.6895    0.0245        -
 6/30   4.5471  0.0488    4.3449    0.0563        saved
 7/30   4.3945  0.0578    5.0274    0.0201        -
 8/30   4.3024  0.0614    4.3285    0.0563        -
 9/30   4.1915  0.0719    5.2332    0.0215        -
10/30   4.0758  0.0802    4.4703    0.0572        saved
11/30   3.9748  0.0927    4.3545    0.0612        saved
12/30   3.8711  0.1080    4.2889    0.0665        saved
13/30   3.7943  0.1230    4.1503    0.0812        saved
14/30   3.6804  0.1312    3.9772    0.1027        saved
15/30   3.5704  0.1478    4.8330    0.0455        -
16/30   3.5232  0.1504    4.0965    0.0895        -
17/30   3.4165  0.1728    4.0845    0.1018        -
18/30   3.3254  0.1936    3.8839    0.1257        saved
19/30   3.2539  0.2075    4.1550    0.0939        -
20/30   3.1620  0.2229    3.6858    0.1438        saved
21/30   3.1088  0.2230    4.0912    0.1071        -
22/30   2.9965  0.2504    3.9099    0.1306        -
23/30   2.9023  0.2611    3.8056    0.1370        -
24/30   2.7893  0.2899    4.1174    0.1204        -
25/30   2.7343  0.3045    3.7305    0.1522        saved
26/30   2.5960  0.3215    4.0021    0.1355        -
27/30   2.5637  0.3310    4.1204    0.1086        -
28/30   2.5387  0.3473    5.9031    0.0641        -
29/30   2.3768  0.3688    4.0731    0.1424        -
30/30   2.2749  0.3963    3.7095    0.1722        saved (best val_accuracy: 0.1722)
64/64 [==============================] - 11s 177ms/step - loss: 3.7095 - accuracy: 0.1722

Start FOLD Number 5
Found 8178 validated image filenames belonging to 120 classes.
Found 2044 validated image filenames belonging to 120 classes.
WARNING:tensorflow: Callbacks method `on_train_batch_end` is slow compared to the batch time; check your callbacks.

(~64-68 s/epoch; checkpoint /content/drive/MyDrive/Data/Models/Model_5.h5 saved whenever val_accuracy improved)
Epoch   loss    accuracy  val_loss  val_accuracy  checkpoint
 1/30   5.6906  0.0141    5.2340    0.0113        saved
 2/30   5.2591  0.0187    4.9036    0.0230        saved
 3/30   4.9902  0.0269    4.7188    0.0215        -
 4/30   4.8329  0.0342    4.7960    0.0284        saved
 5/30   4.7077  0.0374    4.6954    0.0289        saved
 6/30   4.5531  0.0440    4.5658    0.0396        saved
 7/30   4.4375  0.0516    4.5683    0.0499        saved
 8/30   4.3008  0.0616    4.6815    0.0426        -
 9/30   4.2141  0.0680    4.5295    0.0470        -
10/30   4.0853  0.0796    4.3388    0.0729        saved
11/30   3.9896  0.0927    5.4183    0.0294        -
12/30   3.8990  0.1030    4.4936    0.0568        -
13/30   3.8181  0.1114    4.5406    0.0621        -
14/30   3.7296  0.1197    4.1382    0.0978        saved
15/30   3.6559  0.1365    4.3121    0.0793        -
16/30   3.5516  0.1503    4.0566    0.1023        saved
17/30   3.4632  0.1672    4.4755    0.0832        -
18/30   3.3669  0.1857    4.4581    0.0837        -
19/30   3.2927  0.2015    4.3313    0.0920        -
20/30   3.1911  0.2092    4.0200    0.1159        saved
21/30   3.1023  0.2185    3.8306    0.1399        saved
22/30   3.0336  0.2362    3.9571    0.1316        -
23/30   2.9465  0.2552    4.0012    0.1350        -
24/30   2.9001  0.2652    4.0245    0.1355        -
25/30   2.8248  0.2838    4.0543    0.1341        -
26/30   2.6884  0.3063    3.7528    0.1610        saved
27/30   2.5679  0.3294    4.0551    0.1341        -
28/30   2.4817  0.3436    3.8781    0.1575        -
29/30   2.3809  0.3644    4.9563    0.0793        -
30/30   2.3474  0.3755    3.9642    0.1663        saved (best val_accuracy: 0.1663)
64/64 [==============================] - 12s 180ms/step - loss: 3.9642 - accuracy: 0.1663
According to the latest results, validation accuracy improved across folds, and the model became both more accurate and less prone to overfitting.
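To put a number on the cross-validation claim, the per-fold evaluation accuracies printed above can be aggregated. A small sketch (fold 1's score falls outside this excerpt, so only the four accuracies visible above are used):

```python
import numpy as np

# Per-fold evaluation accuracies taken from the logs above
# (fold 1's score is not shown in this excerpt)
fold_accuracies = [0.1731, 0.1688, 0.1722, 0.1663]

mean_acc = np.mean(fold_accuracies)
std_acc = np.std(fold_accuracies)
print(f"CV accuracy: {mean_acc:.4f} +/- {std_acc:.4f}")  # → CV accuracy: 0.1701 +/- 0.0027
```

Reporting mean and standard deviation together makes it clear how stable the model is across folds.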
lables_df = pd.read_csv("/content/drive/MyDrive/Data/labels.csv" , engine="python")
lables_df['id'] = lables_df['id'] + '.jpg'
lables_df.set_index('id' , inplace=True)
sub_lables_df = pd.read_csv(r'/content/drive/MyDrive/Data/sample_submission.csv' , engine="python")
lables = sub_lables_df.columns
test_dir = r'/content/drive/MyDrive/Data/test'
main_dir = r"/content/drive/MyDrive/Data"
sub_lables_df['id'] = sub_lables_df['id'] + '.jpg'
img_data_array = []
true_classes = []
pred_lables = []
dir_name = r'/content/drive/MyDrive/Data/train'
model = get_third_model()
model.load_weights(r'/content/drive/MyDrive/Data/Models/Model_3.h5')
num_true = 0
num_false = 0
for filename in os.listdir(dir_name):
    if num_true + num_false == 40:
        break
    image_path = os.path.join(dir_name, filename)
    image = load_img(image_path)
    pred = model.predict(np.expand_dims(np.array(image.resize((300, 300))), axis=0))
    class_num = np.argmax(pred, axis=1)
    # Offset by 1: the first column of the sample submission is 'id',
    # so class index i corresponds to lables[i + 1]
    pred_lbl = lables[class_num + 1][0]
    image = image.resize((400, 400))
    image = np.array(image).astype('float32')
    image /= 255
    true_lbl = lables_df.loc[filename, 'breed']
    if true_lbl == pred_lbl:
        true_classes.append(true_lbl)
        img_data_array.append(image)
        pred_lables.append(pred_lbl)
        num_true += 1
    elif num_false < 25:
        true_classes.append(true_lbl)
        img_data_array.append(image)
        pred_lables.append(pred_lbl)
        num_false += 1
Display_Images(img_data_array,true_classes,10,4,(30,30),pred_lables)
test_datagen = ImageDataGenerator(rescale=1./255)
df=pd.DataFrame(sub_lables_df['id'])
test_generator = test_datagen.flow_from_dataframe(
    dataframe=df,
    directory=test_dir,
    x_col="id",
    y_col=None,
    target_size=(300, 300),
    color_mode="rgb",
    batch_size=1,
    seed=20,
    class_mode=None,  # unlabeled test data: yield images only
    shuffle=False     # preserve file order so predictions align with df['id']
)
# predict_generator is deprecated in TF 2.x; model.predict accepts generators directly
pred = model.predict(test_generator, steps=len(test_generator), verbose=1)
pred_df = pd.DataFrame(pred)
df = pd.DataFrame()
df['id'] = sub_lables_df['id']
columns_names = ['id']
for i in range(120):
    df[i+1] = pred_df[i]
    col_name = lables[i+1]  # skip the leading 'id' column
    columns_names.append(col_name)
df.columns = columns_names
print(df)
df.to_csv(main_dir+'/submission_model3.csv', index = False)
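The column-by-column loop above works; an equivalent vectorized construction builds the same submission frame in two calls. This sketch uses hypothetical stand-in data, since the real `pred` matrix and breed columns come from the cells above:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins: 3 test images, 4 breed columns
ids = ["a.jpg", "b.jpg", "c.jpg"]
breed_columns = ["affenpinscher", "afghan_hound", "airedale", "akita"]
pred = np.full((3, 4), 0.25)  # dummy softmax output; each row sums to 1

# Build the submission frame directly from the prediction matrix
submission = pd.DataFrame(pred, columns=breed_columns)
submission.insert(0, "id", ids)
print(submission.head())
```

Passing the class names as `columns` avoids the temporary integer column names and the manual rename step.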
plt.figure(figsize = (20,2))
plt.imshow(mpimg.imread(r'C:\Users\shachar meretz\Desktop\pic.PNG'))
In addition to the two suggestions you applied, implement inference-time augmentation and report the improvement in the metrics you received.
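Inference-time (test-time) augmentation averages the model's class probabilities over several augmented views of each image, so that a prediction does not hinge on a single orientation. A minimal framework-agnostic sketch of the idea, where `predict_fn` is a hypothetical stand-in for `model.predict`:

```python
import numpy as np

def tta_predict(predict_fn, image, n_aug=4):
    """Average class probabilities over simple augmented views
    (identity, horizontal flip, vertical flip, both flips)."""
    views = [
        image,                 # original
        image[:, ::-1, :],     # horizontal flip
        image[::-1, :, :],     # vertical flip
        image[::-1, ::-1, :],  # both flips
    ][:n_aug]
    # predict_fn takes a batch of shape (1, H, W, C) and returns (1, n_classes)
    probs = np.stack([predict_fn(np.expand_dims(v, axis=0))[0] for v in views])
    return probs.mean(axis=0)  # averaged probabilities, shape (n_classes,)
```

In the notebook's setting the flips could instead be drawn from the same `ImageDataGenerator` used in training; the key design choice is averaging the softmax outputs rather than the argmax labels, so confident and uncertain views are weighted accordingly.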
We will change the input size to 100×100.
To adapt the model to the new input dimensions, we remove the last convolution layer (the one with 2048 filters). This makes the augmentation process more efficient and helps the model generalize.
def get_model_for_augmentation():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, 3)))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(128, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(1024, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(1024, activation='relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(512, activation='relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(120, activation='softmax'))
    model.summary()
    return model
fit_model_5Fold(4,(100,100),True)
Start FOLD Number 1
Found 8177 validated image filenames belonging to 120 classes.
Found 2045 validated image filenames belonging to 120 classes.

Model: "sequential_6"
Layer (type)                           Output Shape          Param #
conv2d_35 (Conv2D)                     (None, 98, 98, 32)    896
batch_normalization_35 (BatchNorm)     (None, 98, 98, 32)    128
max_pooling2d_31 (MaxPooling2D)        (None, 49, 49, 32)    0
dropout_35 (Dropout)                   (None, 49, 49, 32)    0
conv2d_36 (Conv2D)                     (None, 47, 47, 64)    18496
batch_normalization_36 (BatchNorm)     (None, 47, 47, 64)    256
max_pooling2d_32 (MaxPooling2D)        (None, 23, 23, 64)    0
dropout_36 (Dropout)                   (None, 23, 23, 64)    0
conv2d_37 (Conv2D)                     (None, 21, 21, 128)   73856
batch_normalization_37 (BatchNorm)     (None, 21, 21, 128)   512
max_pooling2d_33 (MaxPooling2D)        (None, 10, 10, 128)   0
dropout_37 (Dropout)                   (None, 10, 10, 128)   0
conv2d_38 (Conv2D)                     (None, 8, 8, 512)     590336
batch_normalization_38 (BatchNorm)     (None, 8, 8, 512)     2048
max_pooling2d_34 (MaxPooling2D)        (None, 4, 4, 512)     0
dropout_38 (Dropout)                   (None, 4, 4, 512)     0
conv2d_39 (Conv2D)                     (None, 2, 2, 1024)    4719616
batch_normalization_39 (BatchNorm)     (None, 2, 2, 1024)    4096
max_pooling2d_35 (MaxPooling2D)        (None, 1, 1, 1024)    0
dropout_39 (Dropout)                   (None, 1, 1, 1024)    0
flatten_2 (Flatten)                    (None, 1024)          0
dense_6 (Dense)                        (None, 1024)          1049600
batch_normalization_40 (BatchNorm)     (None, 1024)          4096
dropout_40 (Dropout)                   (None, 1024)          0
dense_7 (Dense)                        (None, 512)           524800
batch_normalization_41 (BatchNorm)     (None, 512)           2048
dropout_41 (Dropout)                   (None, 512)           0
dense_8 (Dense)                        (None, 120)           61560
Total params: 7,052,344   Trainable params: 7,045,752   Non-trainable params: 6,592

Training, fold 1 (30 epochs, 256 steps/epoch; ~62-66 s/epoch after a slow first epoch of 3314 s):
training loss fell from 5.7655 to 3.8998 and training accuracy rose from 0.0097 to 0.1009.
val_accuracy improved at epochs 1, 2, 3, 5, 6, 8, 10, 12, 15, 19 and 27, each time saving a
checkpoint to /content/drive/MyDrive/Data/Models/Model_1.h5. Best validation result:
val_accuracy 0.0592 with val_loss 4.2883 at epoch 27. Final epoch: loss 3.8998,
accuracy 0.1009, val_loss 4.9509, val_accuracy 0.0308.
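The parameter counts in the summary above can be reproduced with plain arithmetic: a Conv2D layer has (kh·kw·c_in + 1)·c_out parameters, a Dense layer (n_in + 1)·n_out, and BatchNormalization keeps 4 values per channel (trainable gamma and beta, plus the non-trainable moving mean and variance). A quick sanity check of the totals:

```python
def conv2d_params(kh, kw, c_in, c_out):
    # kh*kw*c_in weights per output channel, plus one bias each
    return (kh * kw * c_in + 1) * c_out

def dense_params(n_in, n_out):
    return (n_in + 1) * n_out

def bn_params(channels):
    # gamma, beta (trainable) + moving mean, moving variance (non-trainable)
    return 4 * channels

convs = [conv2d_params(3, 3, c_in, c_out)
         for c_in, c_out in [(3, 32), (32, 64), (64, 128), (128, 512), (512, 1024)]]
denses = [dense_params(1024, 1024), dense_params(1024, 512), dense_params(512, 120)]
bn_channels = [32, 64, 128, 512, 1024, 1024, 512]
bns = [bn_params(c) for c in bn_channels]

total = sum(convs) + sum(denses) + sum(bns)
non_trainable = sum(2 * c for c in bn_channels)  # moving statistics only
print(total, total - non_trainable, non_trainable)  # 7052344 7045752 6592
```

The first convolution alone gives (3·3·3 + 1)·32 = 896, matching the first row of the summary.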
Fold 1 evaluation: 64/64 steps, 12 s (180 ms/step), loss 4.2963, accuracy 0.0606.

Start FOLD Number 2
Found 8177 validated image filenames belonging to 120 classes.
Found 2045 validated image filenames belonging to 120 classes.

Model: "sequential" - same architecture and shapes as fold 1
(Total params: 7,052,344   Trainable params: 7,045,752   Non-trainable params: 6,592)

Training, fold 2 (30 epochs, 256 steps/epoch, ~62 s/epoch): training loss fell from 5.7610
to 3.9081 and training accuracy rose from 0.0131 to 0.0995. val_accuracy improved at epochs
1, 3, 8, 9, 16, 17, 22 and 26, each time saving a checkpoint to
/content/drive/MyDrive/Data/Models/Model_2.h5. Best validation result: val_accuracy 0.0763
with val_loss 4.1098 at epoch 26. Final epoch: loss 3.9081, accuracy 0.0995,
val_loss 4.3163, val_accuracy 0.0660.
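The "val_accuracy improved ... saving model to" / "did not improve" lines in these logs are consistent with a `ModelCheckpoint` callback monitoring `val_accuracy` with `save_best_only=True` (the exact callback configuration is an assumption here, since the training code is defined elsewhere). Its best-so-far rule is a strict improvement over the running maximum, which can be mimicked in a few lines:

```python
def checkpoint_epochs(val_accs):
    """Return the 1-based epochs at which a save_best_only checkpoint
    monitoring val_accuracy would overwrite the saved model (strict
    improvement over the best value seen so far)."""
    best = float("-inf")
    saved = []
    for epoch, acc in enumerate(val_accs, start=1):
        if acc > best:
            best = acc
            saved.append(epoch)
    return saved

# Fold 2 val_accuracy values, epoch by epoch, from the log above
fold2 = [0.0064, 0.0064, 0.0196, 0.0152, 0.0112, 0.0108, 0.0108, 0.0293,
         0.0391, 0.0298, 0.0264, 0.0244, 0.0249, 0.0308, 0.0225, 0.0479,
         0.0577, 0.0450, 0.0450, 0.0352, 0.0460, 0.0631, 0.0601, 0.0460,
         0.0372, 0.0763, 0.0694, 0.0724, 0.0675, 0.0660]
print(checkpoint_epochs(fold2))  # [1, 3, 8, 9, 16, 17, 22, 26]
```

These are exactly the epochs at which the fold-2 log reports saving Model_2.h5.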
Fold 2 evaluation: 64/64 steps, 11 s (179 ms/step), loss 4.1149, accuracy 0.0802.

Start FOLD Number 3
Found 8178 validated image filenames belonging to 120 classes.
Found 2044 validated image filenames belonging to 120 classes.

Model: "sequential" - same architecture and shapes as fold 1
(Total params: 7,052,344   Trainable params: 7,045,752   Non-trainable params: 6,592)

Training, fold 3 (256 steps/epoch, ~62 s/epoch after a 67 s first epoch): over epochs 1-19,
training loss fell from 5.7557 to 4.1752 and training accuracy rose from 0.0112 to 0.0614.
val_accuracy improved at epochs 1, 2, 3, 7, 8, 9, 10, 13, 14, 16 and 17, each time saving a
checkpoint to /content/drive/MyDrive/Data/Models/Model_3.h5; the best value through epoch 19
was val_accuracy 0.0509 with val_loss 4.4249 at epoch 17. Epoch 20: loss 4.1442,
accuracy 0.0720; val_accuracy improved from 0.05088 to 0.06115, saving model to
/content/drive/MyDrive/Data/Models/Model_3.h5 256/256 [==============================] - 63s 246ms/step - loss: 4.1442 - accuracy: 0.0720 - val_loss: 4.2742 - val_accuracy: 0.0612 Epoch 21/30 256/256 [==============================] - ETA: 0s - loss: 4.1273 - accuracy: 0.0687 Epoch 00021: val_accuracy did not improve from 0.06115 256/256 [==============================] - 62s 243ms/step - loss: 4.1273 - accuracy: 0.0687 - val_loss: 4.6252 - val_accuracy: 0.0377 Epoch 22/30 256/256 [==============================] - ETA: 0s - loss: 4.0926 - accuracy: 0.0741 Epoch 00022: val_accuracy did not improve from 0.06115 256/256 [==============================] - 61s 239ms/step - loss: 4.0926 - accuracy: 0.0741 - val_loss: 4.4117 - val_accuracy: 0.0519 Epoch 23/30 256/256 [==============================] - ETA: 0s - loss: 4.0532 - accuracy: 0.0802 Epoch 00023: val_accuracy did not improve from 0.06115 256/256 [==============================] - 61s 238ms/step - loss: 4.0532 - accuracy: 0.0802 - val_loss: 4.3731 - val_accuracy: 0.0523 Epoch 24/30 256/256 [==============================] - ETA: 0s - loss: 4.0348 - accuracy: 0.0829 Epoch 00024: val_accuracy did not improve from 0.06115 256/256 [==============================] - 60s 236ms/step - loss: 4.0348 - accuracy: 0.0829 - val_loss: 4.5756 - val_accuracy: 0.0475 Epoch 25/30 256/256 [==============================] - ETA: 0s - loss: 4.0135 - accuracy: 0.0809 Epoch 00025: val_accuracy did not improve from 0.06115 256/256 [==============================] - 61s 239ms/step - loss: 4.0135 - accuracy: 0.0809 - val_loss: 4.4668 - val_accuracy: 0.0602 Epoch 26/30 256/256 [==============================] - ETA: 0s - loss: 3.9895 - accuracy: 0.0913 Epoch 00026: val_accuracy improved from 0.06115 to 0.06507, saving model to /content/drive/MyDrive/Data/Models/Model_3.h5 256/256 [==============================] - 61s 239ms/step - loss: 3.9895 - accuracy: 0.0913 - val_loss: 4.2625 - val_accuracy: 0.0651 Epoch 27/30 256/256 
[==============================] - ETA: 0s - loss: 3.9819 - accuracy: 0.0873 Epoch 00027: val_accuracy did not improve from 0.06507 256/256 [==============================] - 60s 236ms/step - loss: 3.9819 - accuracy: 0.0873 - val_loss: 5.4714 - val_accuracy: 0.0186 Epoch 28/30 256/256 [==============================] - ETA: 0s - loss: 3.9297 - accuracy: 0.1001 Epoch 00028: val_accuracy did not improve from 0.06507 256/256 [==============================] - 60s 236ms/step - loss: 3.9297 - accuracy: 0.1001 - val_loss: 4.3380 - val_accuracy: 0.0553 Epoch 29/30 256/256 [==============================] - ETA: 0s - loss: 3.9132 - accuracy: 0.1036 Epoch 00029: val_accuracy did not improve from 0.06507 256/256 [==============================] - 60s 236ms/step - loss: 3.9132 - accuracy: 0.1036 - val_loss: 4.4724 - val_accuracy: 0.0558 Epoch 30/30 256/256 [==============================] - ETA: 0s - loss: 3.8998 - accuracy: 0.1011 Epoch 00030: val_accuracy did not improve from 0.06507 256/256 [==============================] - 61s 237ms/step - loss: 3.8998 - accuracy: 0.1011 - val_loss: 4.3501 - val_accuracy: 0.0597
Fold 3 evaluation: 64/64 - 11s 178ms/step - loss: 4.2650 - accuracy: 0.0626

Start FOLD Number 4
Found 8178 validated image filenames belonging to 120 classes.
Found 2044 validated image filenames belonging to 120 classes.

Model: "sequential"
Layer (type)                    Output Shape           Param #
conv2d (Conv2D)                 (None, 98, 98, 32)     896
batch_normalization (BatchNorm) (None, 98, 98, 32)     128
max_pooling2d (MaxPooling2D)    (None, 49, 49, 32)     0
dropout (Dropout)               (None, 49, 49, 32)     0
conv2d_1 (Conv2D)               (None, 47, 47, 64)     18496
batch_normalization_1           (None, 47, 47, 64)     256
max_pooling2d_1                 (None, 23, 23, 64)     0
dropout_1 (Dropout)             (None, 23, 23, 64)     0
conv2d_2 (Conv2D)               (None, 21, 21, 128)    73856
batch_normalization_2           (None, 21, 21, 128)    512
max_pooling2d_2                 (None, 10, 10, 128)    0
dropout_2 (Dropout)             (None, 10, 10, 128)    0
conv2d_3 (Conv2D)               (None, 8, 8, 512)      590336
batch_normalization_3           (None, 8, 8, 512)      2048
max_pooling2d_3                 (None, 4, 4, 512)      0
dropout_3 (Dropout)             (None, 4, 4, 512)      0
conv2d_4 (Conv2D)               (None, 2, 2, 1024)     4719616
batch_normalization_4           (None, 2, 2, 1024)     4096
max_pooling2d_4                 (None, 1, 1, 1024)     0
dropout_4 (Dropout)             (None, 1, 1, 1024)     0
flatten (Flatten)               (None, 1024)           0
dense (Dense)                   (None, 1024)           1049600
batch_normalization_5           (None, 1024)           4096
dropout_5 (Dropout)             (None, 1024)           0
dense_1 (Dense)                 (None, 512)            524800
batch_normalization_6           (None, 512)            2048
dropout_6 (Dropout)             (None, 512)            0
dense_2 (Dense)                 (None, 120)            61560
Total params: 7,052,344 | Trainable: 7,045,752 | Non-trainable: 6,592

Epoch  1/30  loss 5.7804  acc 0.0122  val_loss 9.2063  val_acc 0.0064  * (saved to /content/drive/MyDrive/Data/Models/Model_4.h5; 65s, 255ms/step)
FOLD 4 training, epochs 2-30 (each epoch ~60-63 s, ~240 ms/step; * = val_accuracy improved and the checkpoint was saved to /content/drive/MyDrive/Data/Models/Model_4.h5):

Epoch  2/30  loss 5.3471  acc 0.0155  val_loss 9.0328  val_acc 0.0049
Epoch  3/30  loss 5.1304  acc 0.0203  val_loss 6.3688  val_acc 0.0147  *
Epoch  4/30  loss 4.9752  acc 0.0236  val_loss 5.9712  val_acc 0.0142
Epoch  5/30  loss 4.8637  acc 0.0262  val_loss 5.6067  val_acc 0.0147
Epoch  6/30  loss 4.7570  acc 0.0304  val_loss 5.8719  val_acc 0.0127
Epoch  7/30  loss 4.6712  acc 0.0346  val_loss 5.6657  val_acc 0.0220  *
Epoch  8/30  loss 4.6036  acc 0.0373  val_loss 5.6978  val_acc 0.0201
Epoch  9/30  loss 4.5490  acc 0.0389  val_loss 5.1021  val_acc 0.0245  *
Epoch 10/30  loss 4.4928  acc 0.0363  val_loss 4.9479  val_acc 0.0279  *
Epoch 11/30  loss 4.4440  acc 0.0448  val_loss 4.9858  val_acc 0.0269
Epoch 12/30  loss 4.3945  acc 0.0460  val_loss 5.1539  val_acc 0.0235
Epoch 13/30  loss 4.3589  acc 0.0533  val_loss 4.5179  val_acc 0.0372  *
Epoch 14/30  loss 4.3049  acc 0.0544  val_loss 4.8841  val_acc 0.0166
Epoch 15/30  loss 4.2809  acc 0.0584  val_loss 4.5132  val_acc 0.0426  *
Epoch 16/30  loss 4.2399  acc 0.0582  val_loss 4.5616  val_acc 0.0362
Epoch 17/30  loss 4.2231  acc 0.0591  val_loss 4.8450  val_acc 0.0298
Epoch 18/30  loss 4.1819  acc 0.0658  val_loss 5.4053  val_acc 0.0166
Epoch 19/30  loss 4.1330  acc 0.0697  val_loss 4.5918  val_acc 0.0475  *
Epoch 20/30  loss 4.1109  acc 0.0747  val_loss 4.4121  val_acc 0.0533  *
Epoch 21/30  loss 4.0938  acc 0.0758  val_loss 4.5760  val_acc 0.0401
Epoch 22/30  loss 4.0641  acc 0.0789  val_loss 4.2235  val_acc 0.0621  *
Epoch 23/30  loss 4.0410  acc 0.0860  val_loss 4.4727  val_acc 0.0558
Epoch 24/30  loss 4.0078  acc 0.0855  val_loss 4.9343  val_acc 0.0382
Epoch 25/30  loss 3.9987  acc 0.0900  val_loss 4.7121  val_acc 0.0386
Epoch 26/30  loss 3.9782  acc 0.0918  val_loss 4.5034  val_acc 0.0489
Epoch 27/30  loss 3.9460  acc 0.0973  val_loss 4.5363  val_acc 0.0558
Epoch 28/30  loss 3.9367  acc 0.0928  val_loss 4.7892  val_acc 0.0460
Epoch 29/30  loss 3.9191  acc 0.0954  val_loss 4.2080  val_acc 0.0734  *
Epoch 30/30  loss 3.8963  acc 0.1008  val_loss 4.4681  val_acc 0.0533
Fold 4 evaluation: 64/64 - 11s 177ms/step - loss: 4.2139 - accuracy: 0.0709

Start FOLD Number 5
Found 8178 validated image filenames belonging to 120 classes.
Found 2044 validated image filenames belonging to 120 classes.

Model: "sequential" (layer stack identical to the previous folds; Total params: 7,052,344 | Trainable: 7,045,752 | Non-trainable: 6,592)

Epoch  1/30  loss 5.7799  acc 0.0122  val_loss 12.8440  val_acc 0.0068  * (saved to /content/drive/MyDrive/Data/Models/Model_5.h5; 66s, 259ms/step)
FOLD 5 training, epochs 2-30 (each epoch ~60-62 s, ~237 ms/step; * = val_accuracy improved and the checkpoint was saved to /content/drive/MyDrive/Data/Models/Model_5.h5):

Epoch  2/30  loss 5.3199  acc 0.0144  val_loss 9.3651  val_acc 0.0068
Epoch  3/30  loss 5.1136  acc 0.0181  val_loss 6.9951  val_acc 0.0093  *
Epoch  4/30  loss 4.9699  acc 0.0237  val_loss 6.6423  val_acc 0.0088
Epoch  5/30  loss 4.8660  acc 0.0227  val_loss 6.0980  val_acc 0.0176  *
Epoch  6/30  loss 4.7809  acc 0.0259  val_loss 5.7521  val_acc 0.0137
Epoch  7/30  loss 4.6831  acc 0.0315  val_loss 5.2041  val_acc 0.0245  *
Epoch  8/30  loss 4.6171  acc 0.0336  val_loss 4.8522  val_acc 0.0264  *
Epoch  9/30  loss 4.5595  acc 0.0342  val_loss 5.1199  val_acc 0.0215
Epoch 10/30  loss 4.5012  acc 0.0406  val_loss 5.0836  val_acc 0.0210
Epoch 11/30  loss 4.4553  acc 0.0416  val_loss 4.7989  val_acc 0.0279  *
Epoch 12/30  loss 4.4130  acc 0.0435  val_loss 4.6489  val_acc 0.0372  *
Epoch 13/30  loss 4.3692  acc 0.0483  val_loss 5.0628  val_acc 0.0201
Epoch 14/30  loss 4.3323  acc 0.0504  val_loss 4.8542  val_acc 0.0308
Epoch 15/30  loss 4.3068  acc 0.0518  val_loss 4.6840  val_acc 0.0416  *
Epoch 16/30  loss 4.2579  acc 0.0537  val_loss 4.7046  val_acc 0.0362
Epoch 17/30  loss 4.2357  acc 0.0538  val_loss 5.1798  val_acc 0.0289
Epoch 18/30  loss 4.1979  acc 0.0619  val_loss 4.5434  val_acc 0.0523  *
Epoch 19/30  loss 4.1727  acc 0.0622  val_loss 4.6097  val_acc 0.0435
Epoch 20/30  loss 4.1397  acc 0.0652  val_loss 4.9970  val_acc 0.0250
Epoch 21/30  loss 4.1042  acc 0.0719  val_loss 4.6253  val_acc 0.0450
Epoch 22/30  loss 4.0807  acc 0.0748  val_loss 4.6361  val_acc 0.0504
Epoch 23/30  loss 4.0626  acc 0.0774  val_loss 4.8039  val_acc 0.0386
Epoch 24/30  loss 4.0406  acc 0.0798  val_loss 4.3541  val_acc 0.0592  *
Epoch 25/30  loss 4.0085  acc 0.0863  val_loss 4.6016  val_acc 0.0445
Epoch 26/30  loss 3.9818  acc 0.0917  val_loss 4.6710  val_acc 0.0572
Epoch 27/30  loss 3.9733  acc 0.0887  val_loss 4.4469  val_acc 0.0660  *
Epoch 28/30  loss 3.9389  acc 0.0916  val_loss 4.5790  val_acc 0.0514
Epoch 29/30  loss 3.9253  acc 0.0900  val_loss 4.5009  val_acc 0.0651
Epoch 30/30  loss 3.8873  acc 0.1041  val_loss 4.5108  val_acc 0.0695  *
64/64 [==============================] - 12s 187ms/step - loss: 4.5184 - accuracy: 0.0705
These results show that although this model's accuracy values are lower, the degree of overfitting is significantly smaller. We assume that with augmentation the model takes longer to learn, so increasing the number of epochs per fold (which is now possible, since there is no overfitting) should yield higher accuracy values.
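One way to train longer per fold without hand-picking the epoch count is an early-stopping callback. This is only a sketch, not part of the original run; the monitored metric and patience value here are assumptions to be tuned per experiment:

```python
import tensorflow as tf

# Stop only after validation accuracy has plateaued for 10 epochs (assumed
# patience), and restore the best weights seen during training.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy',
    patience=10,
    restore_best_weights=True,
    mode='max',
)
# This would be passed alongside the existing checkpoint callback, e.g.:
# model.fit(..., epochs=100, callbacks=[checkpoint, early_stop])
```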
1. Adding convolution layers with filter counts in ascending order (from 32 to 2048).
2. Increasing the dropout value from 0.2 to 0.5.
3. Reducing the input image dimensions to 300x300.
4. Adding a Batch Normalization layer after each convolution layer and after each Dense layer.
5. Adding several Dense layers at the end of the model.
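Taken together, changes 1-5 follow the pattern sketched below. This is illustrative only: the input is shrunk to 128x128 and only three convolution blocks are shown, whereas the actual model uses 300x300 inputs and filter counts ascending to 2048:

```python
import tensorflow as tf
from tensorflow.keras.layers import (Conv2D, MaxPool2D, BatchNormalization,
                                     Flatten, Dense, Dropout)
from tensorflow.keras.models import Sequential

model_sketch = Sequential([
    tf.keras.Input(shape=(128, 128, 3)),   # real model: (300, 300, 3)
    Conv2D(32, 3, activation='relu'),
    BatchNormalization(),                  # BN after each convolution
    MaxPool2D(),
    Conv2D(64, 3, activation='relu'),      # ascending filter counts
    BatchNormalization(),
    MaxPool2D(),
    Conv2D(128, 3, activation='relu'),
    BatchNormalization(),
    MaxPool2D(),
    Flatten(),
    Dense(256, activation='relu'),
    BatchNormalization(),                  # BN after each Dense
    Dropout(0.5),                          # increased dropout
    Dense(120, activation='softmax'),      # one unit per breed
])
```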
1. Reducing the input image dimensions below 300x300.
2. Changing the MaxPool layers to GlobalPool.
3. Increasing the number of epochs per fold.
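For item 2, replacing the final MaxPool/Flatten with a global pooling layer removes the large Flatten-to-Dense parameter block and makes the classifier head independent of the spatial input size. A minimal sketch (the layer sizes here are assumptions, not the actual model):

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Sequential

# GlobalAveragePooling2D collapses each feature map to a single value,
# so the Dense head sees a fixed-size vector regardless of image size.
head_sketch = Sequential([
    tf.keras.Input(shape=(100, 100, 3)),
    Conv2D(64, 3, activation='relu'),
    GlobalAveragePooling2D(),   # replaces MaxPool2D + Flatten
    Dense(120, activation='softmax'),
])
```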
first_model_input_size = "(375,375,3)"
second_model_input_size = "(300,300,3)"
third_model_input_size = "(300,300,3)"
fourth_model_input_size = "(100,100,3)"

# Collect the validation results of all four models into one summary table
summary_df = pd.DataFrame(columns=['', 'Basic Model', 'Improvement Model', 'Final Model', 'Augmentation Model'])
summary_df[''] = ['Input Size', 'Validation Accuracy', 'Validation Loss']
summary_df.set_index('', inplace=True)
summary_df['Basic Model'] = [first_model_input_size, 0.0210, 41.5699]
summary_df['Improvement Model'] = [second_model_input_size, 0.0313, 5.3848]
summary_df['Final Model'] = [third_model_input_size, 0.1663, 3.9642]
summary_df['Augmentation Model'] = [fourth_model_input_size, 0.0705, 4.5184]
summary_df
|  | Basic Model | Improvement Model | Final Model | Augmentation Model |
|---|---|---|---|---|
| Input Size | (375,375,3) | (300,300,3) | (300,300,3) | (100,100,3) |
| Validation Accuracy | 0.021 | 0.0313 | 0.1663 | 0.0705 |
| Validation Loss | 41.5699 | 5.3848 | 3.9642 | 4.5184 |
def Get_Xception_Model_With_Last_Layer():
    # Load Xception pre-trained on ImageNet, without its classification head,
    # with global max pooling so the backbone outputs a flat 2048-dim vector
    xception = tf.keras.applications.Xception(
        include_top=False,
        weights='imagenet',
        input_shape=(224, 224, 3),
        pooling='max',
    )
    # Attach a new softmax head for the 120 dog breeds
    x = Dense(120, activation='softmax', name='predictions_final')(xception.layers[-1].output)
    model = Model(inputs=xception.input, outputs=x)
    model.summary()
    return model

model = Get_Xception_Model_With_Last_Layer()
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 224, 224, 3) 0
__________________________________________________________________________________________________
block1_conv1 (Conv2D) (None, 111, 111, 32) 864 input_1[0][0]
__________________________________________________________________________________________________
block1_conv1_bn (BatchNormaliza (None, 111, 111, 32) 128 block1_conv1[0][0]
__________________________________________________________________________________________________
block1_conv1_act (Activation) (None, 111, 111, 32) 0 block1_conv1_bn[0][0]
__________________________________________________________________________________________________
block1_conv2 (Conv2D) (None, 109, 109, 64) 18432 block1_conv1_act[0][0]
__________________________________________________________________________________________________
block1_conv2_bn (BatchNormaliza (None, 109, 109, 64) 256 block1_conv2[0][0]
__________________________________________________________________________________________________
block1_conv2_act (Activation) (None, 109, 109, 64) 0 block1_conv2_bn[0][0]
__________________________________________________________________________________________________
block2_sepconv1 (SeparableConv2 (None, 109, 109, 128 8768 block1_conv2_act[0][0]
__________________________________________________________________________________________________
block2_sepconv1_bn (BatchNormal (None, 109, 109, 128 512 block2_sepconv1[0][0]
__________________________________________________________________________________________________
block2_sepconv2_act (Activation (None, 109, 109, 128 0 block2_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block2_sepconv2 (SeparableConv2 (None, 109, 109, 128 17536 block2_sepconv2_act[0][0]
__________________________________________________________________________________________________
block2_sepconv2_bn (BatchNormal (None, 109, 109, 128 512 block2_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 55, 55, 128) 8192 block1_conv2_act[0][0]
__________________________________________________________________________________________________
block2_pool (MaxPooling2D) (None, 55, 55, 128) 0 block2_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 55, 55, 128) 512 conv2d[0][0]
__________________________________________________________________________________________________
add (Add) (None, 55, 55, 128) 0 block2_pool[0][0]
batch_normalization[0][0]
__________________________________________________________________________________________________
block3_sepconv1_act (Activation (None, 55, 55, 128) 0 add[0][0]
__________________________________________________________________________________________________
block3_sepconv1 (SeparableConv2 (None, 55, 55, 256) 33920 block3_sepconv1_act[0][0]
__________________________________________________________________________________________________
block3_sepconv1_bn (BatchNormal (None, 55, 55, 256) 1024 block3_sepconv1[0][0]
__________________________________________________________________________________________________
block3_sepconv2_act (Activation (None, 55, 55, 256) 0 block3_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block3_sepconv2 (SeparableConv2 (None, 55, 55, 256) 67840 block3_sepconv2_act[0][0]
__________________________________________________________________________________________________
block3_sepconv2_bn (BatchNormal (None, 55, 55, 256) 1024 block3_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 28, 28, 256) 32768 add[0][0]
__________________________________________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0 block3_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 28, 28, 256) 1024 conv2d_1[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 28, 28, 256) 0 block3_pool[0][0]
batch_normalization_1[0][0]
__________________________________________________________________________________________________
block4_sepconv1_act (Activation (None, 28, 28, 256) 0 add_1[0][0]
__________________________________________________________________________________________________
block4_sepconv1 (SeparableConv2 (None, 28, 28, 728) 188672 block4_sepconv1_act[0][0]
__________________________________________________________________________________________________
block4_sepconv1_bn (BatchNormal (None, 28, 28, 728) 2912 block4_sepconv1[0][0]
__________________________________________________________________________________________________
block4_sepconv2_act (Activation (None, 28, 28, 728) 0 block4_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block4_sepconv2 (SeparableConv2 (None, 28, 28, 728) 536536 block4_sepconv2_act[0][0]
__________________________________________________________________________________________________
block4_sepconv2_bn (BatchNormal (None, 28, 28, 728) 2912 block4_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 14, 14, 728) 186368 add_1[0][0]
__________________________________________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 728) 0 block4_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 14, 14, 728) 2912 conv2d_2[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 14, 14, 728) 0 block4_pool[0][0]
batch_normalization_2[0][0]
__________________________________________________________________________________________________
block5_sepconv1_act (Activation (None, 14, 14, 728) 0 add_2[0][0]
__________________________________________________________________________________________________
block5_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block5_sepconv1_act[0][0]
__________________________________________________________________________________________________
block5_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block5_sepconv1[0][0]
__________________________________________________________________________________________________
block5_sepconv2_act (Activation (None, 14, 14, 728) 0 block5_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block5_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block5_sepconv2_act[0][0]
__________________________________________________________________________________________________
block5_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block5_sepconv2[0][0]
__________________________________________________________________________________________________
block5_sepconv3_act (Activation (None, 14, 14, 728) 0 block5_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block5_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block5_sepconv3_act[0][0]
__________________________________________________________________________________________________
block5_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block5_sepconv3[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 14, 14, 728) 0 block5_sepconv3_bn[0][0]
add_2[0][0]
__________________________________________________________________________________________________
block6_sepconv1_act (Activation (None, 14, 14, 728) 0 add_3[0][0]
__________________________________________________________________________________________________
block6_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block6_sepconv1_act[0][0]
__________________________________________________________________________________________________
block6_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block6_sepconv1[0][0]
__________________________________________________________________________________________________
block6_sepconv2_act (Activation (None, 14, 14, 728) 0 block6_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block6_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block6_sepconv2_act[0][0]
__________________________________________________________________________________________________
block6_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block6_sepconv2[0][0]
__________________________________________________________________________________________________
block6_sepconv3_act (Activation (None, 14, 14, 728) 0 block6_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block6_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block6_sepconv3_act[0][0]
__________________________________________________________________________________________________
block6_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block6_sepconv3[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 14, 14, 728) 0 block6_sepconv3_bn[0][0]
add_3[0][0]
__________________________________________________________________________________________________
block7_sepconv1_act (Activation (None, 14, 14, 728) 0 add_4[0][0]
__________________________________________________________________________________________________
block7_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block7_sepconv1_act[0][0]
__________________________________________________________________________________________________
block7_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block7_sepconv1[0][0]
__________________________________________________________________________________________________
block7_sepconv2_act (Activation (None, 14, 14, 728) 0 block7_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block7_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block7_sepconv2_act[0][0]
__________________________________________________________________________________________________
block7_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block7_sepconv2[0][0]
__________________________________________________________________________________________________
block7_sepconv3_act (Activation (None, 14, 14, 728) 0 block7_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block7_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block7_sepconv3_act[0][0]
__________________________________________________________________________________________________
block7_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block7_sepconv3[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 14, 14, 728) 0 block7_sepconv3_bn[0][0]
add_4[0][0]
__________________________________________________________________________________________________
block8_sepconv1_act (Activation (None, 14, 14, 728) 0 add_5[0][0]
__________________________________________________________________________________________________
block8_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block8_sepconv1_act[0][0]
__________________________________________________________________________________________________
block8_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block8_sepconv1[0][0]
__________________________________________________________________________________________________
block8_sepconv2_act (Activation (None, 14, 14, 728) 0 block8_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block8_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block8_sepconv2_act[0][0]
__________________________________________________________________________________________________
block8_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block8_sepconv2[0][0]
__________________________________________________________________________________________________
block8_sepconv3_act (Activation (None, 14, 14, 728) 0 block8_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block8_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block8_sepconv3_act[0][0]
__________________________________________________________________________________________________
block8_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block8_sepconv3[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 14, 14, 728) 0 block8_sepconv3_bn[0][0]
add_5[0][0]
__________________________________________________________________________________________________
block9_sepconv1_act (Activation (None, 14, 14, 728) 0 add_6[0][0]
__________________________________________________________________________________________________
block9_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block9_sepconv1_act[0][0]
__________________________________________________________________________________________________
block9_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block9_sepconv1[0][0]
__________________________________________________________________________________________________
block9_sepconv2_act (Activation (None, 14, 14, 728) 0 block9_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block9_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block9_sepconv2_act[0][0]
__________________________________________________________________________________________________
block9_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block9_sepconv2[0][0]
__________________________________________________________________________________________________
block9_sepconv3_act (Activation (None, 14, 14, 728) 0 block9_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block9_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block9_sepconv3_act[0][0]
__________________________________________________________________________________________________
block9_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block9_sepconv3[0][0]
__________________________________________________________________________________________________
add_7 (Add) (None, 14, 14, 728) 0 block9_sepconv3_bn[0][0]
add_6[0][0]
__________________________________________________________________________________________________
block10_sepconv1_act (Activatio (None, 14, 14, 728) 0 add_7[0][0]
__________________________________________________________________________________________________
block10_sepconv1 (SeparableConv (None, 14, 14, 728) 536536 block10_sepconv1_act[0][0]
__________________________________________________________________________________________________
block10_sepconv1_bn (BatchNorma (None, 14, 14, 728) 2912 block10_sepconv1[0][0]
__________________________________________________________________________________________________
block10_sepconv2_act (Activatio (None, 14, 14, 728) 0 block10_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block10_sepconv2 (SeparableConv (None, 14, 14, 728) 536536 block10_sepconv2_act[0][0]
__________________________________________________________________________________________________
block10_sepconv2_bn (BatchNorma (None, 14, 14, 728) 2912 block10_sepconv2[0][0]
__________________________________________________________________________________________________
block10_sepconv3_act (Activatio (None, 14, 14, 728) 0 block10_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block10_sepconv3 (SeparableConv (None, 14, 14, 728) 536536 block10_sepconv3_act[0][0]
__________________________________________________________________________________________________
block10_sepconv3_bn (BatchNorma (None, 14, 14, 728) 2912 block10_sepconv3[0][0]
__________________________________________________________________________________________________
add_8 (Add) (None, 14, 14, 728) 0 block10_sepconv3_bn[0][0]
add_7[0][0]
__________________________________________________________________________________________________
block11_sepconv1_act (Activatio (None, 14, 14, 728) 0 add_8[0][0]
__________________________________________________________________________________________________
block11_sepconv1 (SeparableConv (None, 14, 14, 728) 536536 block11_sepconv1_act[0][0]
__________________________________________________________________________________________________
block11_sepconv1_bn (BatchNorma (None, 14, 14, 728) 2912 block11_sepconv1[0][0]
__________________________________________________________________________________________________
block11_sepconv2_act (Activatio (None, 14, 14, 728) 0 block11_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block11_sepconv2 (SeparableConv (None, 14, 14, 728) 536536 block11_sepconv2_act[0][0]
__________________________________________________________________________________________________
block11_sepconv2_bn (BatchNorma (None, 14, 14, 728) 2912 block11_sepconv2[0][0]
__________________________________________________________________________________________________
block11_sepconv3_act (Activatio (None, 14, 14, 728) 0 block11_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block11_sepconv3 (SeparableConv (None, 14, 14, 728) 536536 block11_sepconv3_act[0][0]
__________________________________________________________________________________________________
block11_sepconv3_bn (BatchNorma (None, 14, 14, 728) 2912 block11_sepconv3[0][0]
__________________________________________________________________________________________________
add_9 (Add) (None, 14, 14, 728) 0 block11_sepconv3_bn[0][0]
add_8[0][0]
__________________________________________________________________________________________________
block12_sepconv1_act (Activatio (None, 14, 14, 728) 0 add_9[0][0]
__________________________________________________________________________________________________
block12_sepconv1 (SeparableConv (None, 14, 14, 728) 536536 block12_sepconv1_act[0][0]
__________________________________________________________________________________________________
block12_sepconv1_bn (BatchNorma (None, 14, 14, 728) 2912 block12_sepconv1[0][0]
__________________________________________________________________________________________________
block12_sepconv2_act (Activatio (None, 14, 14, 728) 0 block12_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block12_sepconv2 (SeparableConv (None, 14, 14, 728) 536536 block12_sepconv2_act[0][0]
__________________________________________________________________________________________________
block12_sepconv2_bn (BatchNorma (None, 14, 14, 728) 2912 block12_sepconv2[0][0]
__________________________________________________________________________________________________
block12_sepconv3_act (Activatio (None, 14, 14, 728) 0 block12_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block12_sepconv3 (SeparableConv (None, 14, 14, 728) 536536 block12_sepconv3_act[0][0]
__________________________________________________________________________________________________
block12_sepconv3_bn (BatchNorma (None, 14, 14, 728) 2912 block12_sepconv3[0][0]
__________________________________________________________________________________________________
add_10 (Add) (None, 14, 14, 728) 0 block12_sepconv3_bn[0][0]
add_9[0][0]
__________________________________________________________________________________________________
block13_sepconv1_act (Activatio (None, 14, 14, 728) 0 add_10[0][0]
__________________________________________________________________________________________________
block13_sepconv1 (SeparableConv (None, 14, 14, 728) 536536 block13_sepconv1_act[0][0]
__________________________________________________________________________________________________
block13_sepconv1_bn (BatchNorma (None, 14, 14, 728) 2912 block13_sepconv1[0][0]
__________________________________________________________________________________________________
block13_sepconv2_act (Activatio (None, 14, 14, 728) 0 block13_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block13_sepconv2 (SeparableConv (None, 14, 14, 1024) 752024 block13_sepconv2_act[0][0]
__________________________________________________________________________________________________
block13_sepconv2_bn (BatchNorma (None, 14, 14, 1024) 4096 block13_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 7, 7, 1024) 745472 add_10[0][0]
__________________________________________________________________________________________________
block13_pool (MaxPooling2D) (None, 7, 7, 1024) 0 block13_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 7, 7, 1024) 4096 conv2d_3[0][0]
__________________________________________________________________________________________________
add_11 (Add) (None, 7, 7, 1024) 0 block13_pool[0][0]
batch_normalization_3[0][0]
__________________________________________________________________________________________________
block14_sepconv1 (SeparableConv (None, 7, 7, 1536) 1582080 add_11[0][0]
__________________________________________________________________________________________________
block14_sepconv1_bn (BatchNorma (None, 7, 7, 1536) 6144 block14_sepconv1[0][0]
__________________________________________________________________________________________________
block14_sepconv1_act (Activatio (None, 7, 7, 1536) 0 block14_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block14_sepconv2 (SeparableConv (None, 7, 7, 2048) 3159552 block14_sepconv1_act[0][0]
__________________________________________________________________________________________________
block14_sepconv2_bn (BatchNorma (None, 7, 7, 2048) 8192 block14_sepconv2[0][0]
__________________________________________________________________________________________________
block14_sepconv2_act (Activatio (None, 7, 7, 2048) 0 block14_sepconv2_bn[0][0]
__________________________________________________________________________________________________
global_max_pooling2d (GlobalMax (None, 2048) 0 block14_sepconv2_act[0][0]
__________________________________________________________________________________________________
predictions_final (Dense) (None, 120) 245880 global_max_pooling2d[0][0]
==================================================================================================
Total params: 21,107,360
Trainable params: 21,052,832
Non-trainable params: 54,528
__________________________________________________________________________________________________
plt.figure(figsize = (20,40))
plt.imshow(mpimg.imread(r'C:\Users\shachar meretz\Desktop\xception_architecture_final.PNG'))
Fit the Xception model:
We split the training set into 5 parts, with one part used as the validation set for the training process.
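Since the breeds are not equally represented, a stratified split (using the StratifiedKFold imported at the top of the notebook) would keep the breed proportions similar in both parts. A sketch on hypothetical labels (the breed names and sample counts below are made up for illustration):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Hypothetical labels: the real data has 120 breeds; 3 suffice to illustrate.
labels = np.array(['beagle', 'pug', 'corgi'] * 5)  # 15 samples, 5 per breed

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
# Take the first of the 5 folds as the validation split; each fold keeps
# the per-breed proportions of the full label set.
train_idx, val_idx = next(skf.split(np.zeros(len(labels)), labels))
```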
import timeit

labels_df = pd.read_csv("/content/drive/MyDrive/Data/labels.csv", engine="python")
main_dir = r"/content/drive/MyDrive/Data"
save_dir = r"/content/drive/MyDrive/Data"

# Build a dataframe mapping image file names to breed labels
train_df = labels_df[['breed']].copy()
train_df['id'] = labels_df['id'] + '.jpg'

# 80/20 split: the first 4/5 for training, the last 1/5 for validation
split_index = int(len(train_df) / 5) * 4
training_data = train_df.iloc[:split_index]
validation_data = train_df.iloc[split_index:]

datagen = ImageDataGenerator()
train_data_generator = datagen.flow_from_dataframe(training_data, directory=os.path.join(main_dir,'train'),
x_col = "id", y_col = "breed",
target_size=(224, 224),
color_mode="rgb",
batch_size=32,
class_mode = "categorical", shuffle = True)
valid_data_generator = datagen.flow_from_dataframe(validation_data, directory=os.path.join(main_dir,'train'),
x_col = "id", y_col = "breed",
target_size=(224, 224),
color_mode="rgb",
batch_size=32,
class_mode = "categorical", shuffle = True)
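One detail worth noting: Xception's ImageNet weights were trained on inputs scaled to [-1, 1], so passing `tf.keras.applications.xception.preprocess_input` to the generator may help. This is a sketch of that variant, not how the run above was configured:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Xception expects pixel values rescaled from [0, 255] to [-1, 1].
preprocess = tf.keras.applications.xception.preprocess_input
datagen_pp = ImageDataGenerator(preprocessing_function=preprocess)

# Quick check on a dummy pixel row: 0 -> -1, 127.5 -> 0, 255 -> +1.
dummy = np.array([[[0.0, 127.5, 255.0]]])
scaled = preprocess(dummy.copy())
```

The `datagen_pp` generator would then replace `datagen` in the `flow_from_dataframe` calls above.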
checkpoint = tf.keras.callbacks.ModelCheckpoint(os.path.join(save_dir,'Xception.h5'), monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
start = timeit.default_timer()
history_xc = model.fit(train_data_generator,epochs=30,callbacks=callbacks_list,validation_data=valid_data_generator)
stop = timeit.default_timer()
fig, ax = plt.subplots(1,2,figsize=(12,4))
ax[0].plot(history_xc.history['accuracy'] , color='red')
ax[0].plot(history_xc.history['val_accuracy'] , color='green')
ax[0].set_title('Model Accuracy')
ax[0].set_ylabel('Accuracy')
ax[0].set_xlabel('Epoch')
ax[0].legend(['Train', 'Validation'], loc='upper left' )
ax[1].plot(history_xc.history['loss'] , color='red')
ax[1].plot(history_xc.history['val_loss'] , color='green')
ax[1].set_title('Model Loss')
ax[1].set_ylabel('Loss')
ax[1].set_xlabel('Epoch')
ax[1].legend(['Train', 'Validation'], loc='upper left')
plt.show()
metrics = model.evaluate(valid_data_generator)
xc_val_loss = metrics[0]
xc_val_acc = metrics[1]
fit_time_xc = stop - start
print("Xception Fit Time : {}".format(fit_time_xc))
print("Xception Validation Accuracy : {}".format(xc_val_acc))
print("Xception Validation Loss : {}".format(xc_val_loss))
Found 8176 validated image filenames belonging to 120 classes.
Found 2046 validated image filenames belonging to 120 classes.
WARNING:tensorflow: Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.1729s vs `on_train_batch_end` time: 0.4344s). Check your callbacks.
Epoch 1/30: loss 3.0528, accuracy 0.2668, val_loss 4.8056, val_accuracy 0.2649 (val_accuracy improved to 0.26491; model saved to /content/drive/MyDrive/Data/Xception300_3.h5)
Epoch 2/30: loss 1.4819, accuracy 0.5773, val_loss 2.1254, val_accuracy 0.4296 (val_accuracy improved to 0.42962; model saved)
Epoch 3/30: loss 0.9346, accuracy 0.7036, val_loss 2.4638, val_accuracy 0.3930
Epoch 4/30: loss 0.6906, accuracy 0.7808, val_loss 1.9377, val_accuracy 0.5132 (val_accuracy improved to 0.51320; model saved)
Epoch 5/30: loss 0.5107, accuracy 0.8311, val_loss 3.1068, val_accuracy 0.4115
Epoch 6/30: loss 0.4100, accuracy 0.8717, val_loss 2.9449, val_accuracy 0.4389
Epoch 7/30: loss 0.3278, accuracy 0.8953, val_loss 3.7575, val_accuracy 0.3710
Epoch 8/30: loss 0.2751, accuracy 0.9128, val_loss 2.6635, val_accuracy 0.4428
Epoch 9/30: loss 0.2522, accuracy 0.9233, val_loss 2.8191, val_accuracy 0.4673
Epoch 10/30: loss 0.2390, accuracy 0.9258, val_loss 2.5744, val_accuracy 0.4897
Epoch 11/30: loss 0.2046, accuracy 0.9362, val_loss 3.2189, val_accuracy 0.4570
Epoch 12/30: loss 0.1874, accuracy 0.9406, val_loss 3.1314, val_accuracy 0.4633
Epoch 13/30: loss 0.1892, accuracy 0.9418, val_loss 2.6278, val_accuracy 0.5068
Epoch 14/30: loss 0.1888, accuracy 0.9424, val_loss 2.8245, val_accuracy 0.4501
Epoch 15/30: loss 0.1941, accuracy 0.9404, val_loss 2.7831, val_accuracy 0.5015
Epoch 16/30: loss 0.1516, accuracy 0.9523, val_loss 2.7187, val_accuracy 0.5029
Epoch 17/30: loss 0.1322, accuracy 0.9562, val_loss 2.9923, val_accuracy 0.4736
Epoch 18/30: loss 0.1579, accuracy 0.9506, val_loss 2.8354, val_accuracy 0.4761
Epoch 19/30: loss 0.1295, accuracy 0.9562, val_loss … (output truncated)
3.0599 - val_accuracy: 0.4804 Epoch 20/30 256/256 [==============================] - ETA: 0s - loss: 0.1511 - accuracy: 0.9530 Epoch 00020: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 693ms/step - loss: 0.1511 - accuracy: 0.9530 - val_loss: 2.6675 - val_accuracy: 0.5024 Epoch 21/30 256/256 [==============================] - ETA: 0s - loss: 0.1327 - accuracy: 0.9596 Epoch 00021: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 691ms/step - loss: 0.1327 - accuracy: 0.9596 - val_loss: 3.1260 - val_accuracy: 0.4663 Epoch 22/30 256/256 [==============================] - ETA: 0s - loss: 0.1235 - accuracy: 0.9612 Epoch 00022: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 692ms/step - loss: 0.1235 - accuracy: 0.9612 - val_loss: 3.7962 - val_accuracy: 0.4384 Epoch 23/30 256/256 [==============================] - ETA: 0s - loss: 0.1198 - accuracy: 0.9615 Epoch 00023: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 691ms/step - loss: 0.1198 - accuracy: 0.9615 - val_loss: 3.4902 - val_accuracy: 0.4492 Epoch 24/30 256/256 [==============================] - ETA: 0s - loss: 0.1233 - accuracy: 0.9598 Epoch 00024: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 690ms/step - loss: 0.1233 - accuracy: 0.9598 - val_loss: 3.1829 - val_accuracy: 0.4775 Epoch 25/30 256/256 [==============================] - ETA: 0s - loss: 0.1096 - accuracy: 0.9673 Epoch 00025: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 691ms/step - loss: 0.1096 - accuracy: 0.9673 - val_loss: 2.8696 - val_accuracy: 0.5005 Epoch 26/30 256/256 [==============================] - ETA: 0s - loss: 0.0996 - accuracy: 0.9695 Epoch 00026: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 691ms/step - loss: 0.0996 - accuracy: 0.9695 - 
val_loss: 2.8282 - val_accuracy: 0.4961 Epoch 27/30 256/256 [==============================] - ETA: 0s - loss: 0.0942 - accuracy: 0.9710 Epoch 00027: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 691ms/step - loss: 0.0942 - accuracy: 0.9710 - val_loss: 3.3208 - val_accuracy: 0.4888 Epoch 28/30 256/256 [==============================] - ETA: 0s - loss: 0.1010 - accuracy: 0.9694 Epoch 00028: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 690ms/step - loss: 0.1010 - accuracy: 0.9694 - val_loss: 3.1099 - val_accuracy: 0.4883 Epoch 29/30 256/256 [==============================] - ETA: 0s - loss: 0.1106 - accuracy: 0.9661 Epoch 00029: val_accuracy did not improve from 0.51320 256/256 [==============================] - 176s 689ms/step - loss: 0.1106 - accuracy: 0.9661 - val_loss: 2.9378 - val_accuracy: 0.4858 Epoch 30/30 256/256 [==============================] - ETA: 0s - loss: 0.0904 - accuracy: 0.9731 Epoch 00030: val_accuracy did not improve from 0.51320 256/256 [==============================] - 177s 691ms/step - loss: 0.0904 - accuracy: 0.9731 - val_loss: 3.4269 - val_accuracy: 0.4668
64/64 [==============================] - 12s 193ms/step - loss: 3.4269 - accuracy: 0.4668
Xception Fit Time : 5946.2579126949995
Xception Validation Accuracy : 0.4667644202709198
Xception Validation Loss: 3.4268548488616943
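The gap between the final training accuracy (0.9731) and the validation accuracy (0.4668) points to heavy overfitting. A quick sanity check on the per-epoch val_accuracy values transcribed from the log above confirms that the best checkpoint came very early in the run:

```python
# val_accuracy per epoch, transcribed from the training log above
val_acc = [0.2649, 0.4296, 0.3930, 0.5132, 0.4115, 0.4389, 0.3710, 0.4428,
           0.4673, 0.4897, 0.4570, 0.4633, 0.5068, 0.4501, 0.5015, 0.5029,
           0.4736, 0.4761, 0.4804, 0.5024, 0.4663, 0.4384, 0.4492, 0.4775,
           0.5005, 0.4961, 0.4888, 0.4883, 0.4858, 0.4668]

# Epochs are 1-indexed in the log, so add 1 to the list index.
best_epoch = max(range(len(val_acc)), key=val_acc.__getitem__) + 1
print(best_epoch, max(val_acc))  # -> 4 0.5132
```

The best validation accuracy is reached at epoch 4 and never improves afterwards, which is why the saved checkpoint (rather than the final weights) is the model worth keeping.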
After training the model, we use it to generate predictions for the training and validation sets, so that we can carry out feature extraction with another deep learning model.
def create_image_train_valid_data_sets(train_dir, labels_df):
    from tensorflow.keras.applications import imagenet_utils
    images_array = []
    labels_array = []
    for file_name in os.listdir(train_dir):
        label = labels_df.loc[file_name, 'breed']
        image_path = os.path.join(train_dir, file_name)
        # Load each image at the 224x224 input size and apply the
        # ImageNet preprocessing expected by the pretrained backbone.
        image = load_img(image_path, target_size=(224, 224))
        image = img_to_array(image)
        new_image = np.expand_dims(image, axis=0)
        new_image = imagenet_utils.preprocess_input(new_image)
        images_array.append(new_image)
        labels_array.append(label)
    images_array = np.vstack(images_array)
    # 80/20 train/validation split
    split = int(len(images_array) / 5 * 4)
    train = images_array[:split]
    train_lbl = labels_array[:split]
    validation = images_array[split:]
    validation_lbl = labels_array[split:]
    print("Train set size : {}\nValidation set size : {}".format(len(train), len(validation)))
    return train, train_lbl, validation, validation_lbl
train_dir = r'C:\Users\ibitton\OneDrive - Intel Corporation\Desktop\Year 4\Deep Learning\Ass1\train\train'
labels_df = pd.read_csv(r'C:\Users\ibitton\OneDrive - Intel Corporation\Desktop\Year 4\Deep Learning\Ass1\train\labels.csv', engine="python")
labels_df['id'] = labels_df['id'] + '.jpg'
labels_df.set_index('id', inplace=True)
train, train_lbls, validation, validation_lbls = create_image_train_valid_data_sets(train_dir, labels_df)
Train set size : 8177
Validation set size : 2045
print(train.shape)
print(np.array(train_lbls).shape)
print(validation.shape)
print(np.array(validation_lbls).shape)
(8177, 224, 224, 3) (8177,) (2045, 224, 224, 3) (2045,)
import timeit
model.load_weights(r'C:\Users\ibitton\OneDrive - Intel Corporation\Desktop\Year 4\Deep Learning\Ass1\Data\Xception.h5')
# Drop the classification head: use the output of the penultimate layer
# (global max pooling, 2048 units) as a feature extractor.
model_new = Model(model.input, model.layers[-2].output)
model_new.summary()
start = timeit.default_timer()
print("Start predict train")
predictions_train = model_new.predict(train, batch_size=32)
Model: "functional_3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 224, 224, 3) 0
__________________________________________________________________________________________________
block1_conv1 (Conv2D) (None, 111, 111, 32) 864 input_1[0][0]
__________________________________________________________________________________________________
block1_conv1_bn (BatchNormaliza (None, 111, 111, 32) 128 block1_conv1[0][0]
__________________________________________________________________________________________________
block1_conv1_act (Activation) (None, 111, 111, 32) 0 block1_conv1_bn[0][0]
__________________________________________________________________________________________________
block1_conv2 (Conv2D) (None, 109, 109, 64) 18432 block1_conv1_act[0][0]
__________________________________________________________________________________________________
block1_conv2_bn (BatchNormaliza (None, 109, 109, 64) 256 block1_conv2[0][0]
__________________________________________________________________________________________________
block1_conv2_act (Activation) (None, 109, 109, 64) 0 block1_conv2_bn[0][0]
__________________________________________________________________________________________________
block2_sepconv1 (SeparableConv2 (None, 109, 109, 128 8768 block1_conv2_act[0][0]
__________________________________________________________________________________________________
block2_sepconv1_bn (BatchNormal (None, 109, 109, 128 512 block2_sepconv1[0][0]
__________________________________________________________________________________________________
block2_sepconv2_act (Activation (None, 109, 109, 128 0 block2_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block2_sepconv2 (SeparableConv2 (None, 109, 109, 128 17536 block2_sepconv2_act[0][0]
__________________________________________________________________________________________________
block2_sepconv2_bn (BatchNormal (None, 109, 109, 128 512 block2_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 55, 55, 128) 8192 block1_conv2_act[0][0]
__________________________________________________________________________________________________
block2_pool (MaxPooling2D) (None, 55, 55, 128) 0 block2_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 55, 55, 128) 512 conv2d[0][0]
__________________________________________________________________________________________________
add (Add) (None, 55, 55, 128) 0 block2_pool[0][0]
batch_normalization[0][0]
__________________________________________________________________________________________________
block3_sepconv1_act (Activation (None, 55, 55, 128) 0 add[0][0]
__________________________________________________________________________________________________
block3_sepconv1 (SeparableConv2 (None, 55, 55, 256) 33920 block3_sepconv1_act[0][0]
__________________________________________________________________________________________________
block3_sepconv1_bn (BatchNormal (None, 55, 55, 256) 1024 block3_sepconv1[0][0]
__________________________________________________________________________________________________
block3_sepconv2_act (Activation (None, 55, 55, 256) 0 block3_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block3_sepconv2 (SeparableConv2 (None, 55, 55, 256) 67840 block3_sepconv2_act[0][0]
__________________________________________________________________________________________________
block3_sepconv2_bn (BatchNormal (None, 55, 55, 256) 1024 block3_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 28, 28, 256) 32768 add[0][0]
__________________________________________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0 block3_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 28, 28, 256) 1024 conv2d_1[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 28, 28, 256) 0 block3_pool[0][0]
batch_normalization_1[0][0]
__________________________________________________________________________________________________
block4_sepconv1_act (Activation (None, 28, 28, 256) 0 add_1[0][0]
__________________________________________________________________________________________________
block4_sepconv1 (SeparableConv2 (None, 28, 28, 728) 188672 block4_sepconv1_act[0][0]
__________________________________________________________________________________________________
block4_sepconv1_bn (BatchNormal (None, 28, 28, 728) 2912 block4_sepconv1[0][0]
__________________________________________________________________________________________________
block4_sepconv2_act (Activation (None, 28, 28, 728) 0 block4_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block4_sepconv2 (SeparableConv2 (None, 28, 28, 728) 536536 block4_sepconv2_act[0][0]
__________________________________________________________________________________________________
block4_sepconv2_bn (BatchNormal (None, 28, 28, 728) 2912 block4_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 14, 14, 728) 186368 add_1[0][0]
__________________________________________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 728) 0 block4_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 14, 14, 728) 2912 conv2d_2[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 14, 14, 728) 0 block4_pool[0][0]
batch_normalization_2[0][0]
__________________________________________________________________________________________________
block5_sepconv1_act (Activation (None, 14, 14, 728) 0 add_2[0][0]
__________________________________________________________________________________________________
block5_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block5_sepconv1_act[0][0]
__________________________________________________________________________________________________
block5_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block5_sepconv1[0][0]
__________________________________________________________________________________________________
block5_sepconv2_act (Activation (None, 14, 14, 728) 0 block5_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block5_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block5_sepconv2_act[0][0]
__________________________________________________________________________________________________
block5_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block5_sepconv2[0][0]
__________________________________________________________________________________________________
block5_sepconv3_act (Activation (None, 14, 14, 728) 0 block5_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block5_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block5_sepconv3_act[0][0]
__________________________________________________________________________________________________
block5_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block5_sepconv3[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 14, 14, 728) 0 block5_sepconv3_bn[0][0]
add_2[0][0]
__________________________________________________________________________________________________
block6_sepconv1_act (Activation (None, 14, 14, 728) 0 add_3[0][0]
__________________________________________________________________________________________________
block6_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block6_sepconv1_act[0][0]
__________________________________________________________________________________________________
block6_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block6_sepconv1[0][0]
__________________________________________________________________________________________________
block6_sepconv2_act (Activation (None, 14, 14, 728) 0 block6_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block6_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block6_sepconv2_act[0][0]
__________________________________________________________________________________________________
block6_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block6_sepconv2[0][0]
__________________________________________________________________________________________________
block6_sepconv3_act (Activation (None, 14, 14, 728) 0 block6_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block6_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block6_sepconv3_act[0][0]
__________________________________________________________________________________________________
block6_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block6_sepconv3[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 14, 14, 728) 0 block6_sepconv3_bn[0][0]
add_3[0][0]
__________________________________________________________________________________________________
block7_sepconv1_act (Activation (None, 14, 14, 728) 0 add_4[0][0]
__________________________________________________________________________________________________
block7_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block7_sepconv1_act[0][0]
__________________________________________________________________________________________________
block7_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block7_sepconv1[0][0]
__________________________________________________________________________________________________
block7_sepconv2_act (Activation (None, 14, 14, 728) 0 block7_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block7_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block7_sepconv2_act[0][0]
__________________________________________________________________________________________________
block7_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block7_sepconv2[0][0]
__________________________________________________________________________________________________
block7_sepconv3_act (Activation (None, 14, 14, 728) 0 block7_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block7_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block7_sepconv3_act[0][0]
__________________________________________________________________________________________________
block7_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block7_sepconv3[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 14, 14, 728) 0 block7_sepconv3_bn[0][0]
add_4[0][0]
__________________________________________________________________________________________________
block8_sepconv1_act (Activation (None, 14, 14, 728) 0 add_5[0][0]
__________________________________________________________________________________________________
block8_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block8_sepconv1_act[0][0]
__________________________________________________________________________________________________
block8_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block8_sepconv1[0][0]
__________________________________________________________________________________________________
block8_sepconv2_act (Activation (None, 14, 14, 728) 0 block8_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block8_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block8_sepconv2_act[0][0]
__________________________________________________________________________________________________
block8_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block8_sepconv2[0][0]
__________________________________________________________________________________________________
block8_sepconv3_act (Activation (None, 14, 14, 728) 0 block8_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block8_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block8_sepconv3_act[0][0]
__________________________________________________________________________________________________
block8_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block8_sepconv3[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 14, 14, 728) 0 block8_sepconv3_bn[0][0]
add_5[0][0]
__________________________________________________________________________________________________
block9_sepconv1_act (Activation (None, 14, 14, 728) 0 add_6[0][0]
__________________________________________________________________________________________________
block9_sepconv1 (SeparableConv2 (None, 14, 14, 728) 536536 block9_sepconv1_act[0][0]
__________________________________________________________________________________________________
block9_sepconv1_bn (BatchNormal (None, 14, 14, 728) 2912 block9_sepconv1[0][0]
__________________________________________________________________________________________________
block9_sepconv2_act (Activation (None, 14, 14, 728) 0 block9_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block9_sepconv2 (SeparableConv2 (None, 14, 14, 728) 536536 block9_sepconv2_act[0][0]
__________________________________________________________________________________________________
block9_sepconv2_bn (BatchNormal (None, 14, 14, 728) 2912 block9_sepconv2[0][0]
__________________________________________________________________________________________________
block9_sepconv3_act (Activation (None, 14, 14, 728) 0 block9_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block9_sepconv3 (SeparableConv2 (None, 14, 14, 728) 536536 block9_sepconv3_act[0][0]
__________________________________________________________________________________________________
block9_sepconv3_bn (BatchNormal (None, 14, 14, 728) 2912 block9_sepconv3[0][0]
__________________________________________________________________________________________________
add_7 (Add) (None, 14, 14, 728) 0 block9_sepconv3_bn[0][0]
add_6[0][0]
__________________________________________________________________________________________________
block10_sepconv1_act (Activatio (None, 14, 14, 728) 0 add_7[0][0]
__________________________________________________________________________________________________
block10_sepconv1 (SeparableConv (None, 14, 14, 728) 536536 block10_sepconv1_act[0][0]
__________________________________________________________________________________________________
block10_sepconv1_bn (BatchNorma (None, 14, 14, 728) 2912 block10_sepconv1[0][0]
__________________________________________________________________________________________________
block10_sepconv2_act (Activatio (None, 14, 14, 728) 0 block10_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block10_sepconv2 (SeparableConv (None, 14, 14, 728) 536536 block10_sepconv2_act[0][0]
__________________________________________________________________________________________________
block10_sepconv2_bn (BatchNorma (None, 14, 14, 728) 2912 block10_sepconv2[0][0]
__________________________________________________________________________________________________
block10_sepconv3_act (Activatio (None, 14, 14, 728) 0 block10_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block10_sepconv3 (SeparableConv (None, 14, 14, 728) 536536 block10_sepconv3_act[0][0]
__________________________________________________________________________________________________
block10_sepconv3_bn (BatchNorma (None, 14, 14, 728) 2912 block10_sepconv3[0][0]
__________________________________________________________________________________________________
add_8 (Add) (None, 14, 14, 728) 0 block10_sepconv3_bn[0][0]
add_7[0][0]
__________________________________________________________________________________________________
block11_sepconv1_act (Activatio (None, 14, 14, 728) 0 add_8[0][0]
__________________________________________________________________________________________________
block11_sepconv1 (SeparableConv (None, 14, 14, 728) 536536 block11_sepconv1_act[0][0]
__________________________________________________________________________________________________
block11_sepconv1_bn (BatchNorma (None, 14, 14, 728) 2912 block11_sepconv1[0][0]
__________________________________________________________________________________________________
block11_sepconv2_act (Activatio (None, 14, 14, 728) 0 block11_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block11_sepconv2 (SeparableConv (None, 14, 14, 728) 536536 block11_sepconv2_act[0][0]
__________________________________________________________________________________________________
block11_sepconv2_bn (BatchNorma (None, 14, 14, 728) 2912 block11_sepconv2[0][0]
__________________________________________________________________________________________________
block11_sepconv3_act (Activatio (None, 14, 14, 728) 0 block11_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block11_sepconv3 (SeparableConv (None, 14, 14, 728) 536536 block11_sepconv3_act[0][0]
__________________________________________________________________________________________________
block11_sepconv3_bn (BatchNorma (None, 14, 14, 728) 2912 block11_sepconv3[0][0]
__________________________________________________________________________________________________
add_9 (Add) (None, 14, 14, 728) 0 block11_sepconv3_bn[0][0]
add_8[0][0]
__________________________________________________________________________________________________
block12_sepconv1_act (Activatio (None, 14, 14, 728) 0 add_9[0][0]
__________________________________________________________________________________________________
block12_sepconv1 (SeparableConv (None, 14, 14, 728) 536536 block12_sepconv1_act[0][0]
__________________________________________________________________________________________________
block12_sepconv1_bn (BatchNorma (None, 14, 14, 728) 2912 block12_sepconv1[0][0]
__________________________________________________________________________________________________
block12_sepconv2_act (Activatio (None, 14, 14, 728) 0 block12_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block12_sepconv2 (SeparableConv (None, 14, 14, 728) 536536 block12_sepconv2_act[0][0]
__________________________________________________________________________________________________
block12_sepconv2_bn (BatchNorma (None, 14, 14, 728) 2912 block12_sepconv2[0][0]
__________________________________________________________________________________________________
block12_sepconv3_act (Activatio (None, 14, 14, 728) 0 block12_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block12_sepconv3 (SeparableConv (None, 14, 14, 728) 536536 block12_sepconv3_act[0][0]
__________________________________________________________________________________________________
block12_sepconv3_bn (BatchNorma (None, 14, 14, 728) 2912 block12_sepconv3[0][0]
__________________________________________________________________________________________________
add_10 (Add) (None, 14, 14, 728) 0 block12_sepconv3_bn[0][0]
add_9[0][0]
__________________________________________________________________________________________________
block13_sepconv1_act (Activatio (None, 14, 14, 728) 0 add_10[0][0]
__________________________________________________________________________________________________
block13_sepconv1 (SeparableConv (None, 14, 14, 728) 536536 block13_sepconv1_act[0][0]
__________________________________________________________________________________________________
block13_sepconv1_bn (BatchNorma (None, 14, 14, 728) 2912 block13_sepconv1[0][0]
__________________________________________________________________________________________________
block13_sepconv2_act (Activatio (None, 14, 14, 728) 0 block13_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block13_sepconv2 (SeparableConv (None, 14, 14, 1024) 752024 block13_sepconv2_act[0][0]
__________________________________________________________________________________________________
block13_sepconv2_bn (BatchNorma (None, 14, 14, 1024) 4096 block13_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 7, 7, 1024) 745472 add_10[0][0]
__________________________________________________________________________________________________
block13_pool (MaxPooling2D) (None, 7, 7, 1024) 0 block13_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 7, 7, 1024) 4096 conv2d_3[0][0]
__________________________________________________________________________________________________
add_11 (Add) (None, 7, 7, 1024) 0 block13_pool[0][0]
batch_normalization_3[0][0]
__________________________________________________________________________________________________
block14_sepconv1 (SeparableConv (None, 7, 7, 1536) 1582080 add_11[0][0]
__________________________________________________________________________________________________
block14_sepconv1_bn (BatchNorma (None, 7, 7, 1536) 6144 block14_sepconv1[0][0]
__________________________________________________________________________________________________
block14_sepconv1_act (Activatio (None, 7, 7, 1536) 0 block14_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block14_sepconv2 (SeparableConv (None, 7, 7, 2048) 3159552 block14_sepconv1_act[0][0]
__________________________________________________________________________________________________
block14_sepconv2_bn (BatchNorma (None, 7, 7, 2048) 8192 block14_sepconv2[0][0]
__________________________________________________________________________________________________
block14_sepconv2_act (Activatio (None, 7, 7, 2048) 0 block14_sepconv2_bn[0][0]
__________________________________________________________________________________________________
global_max_pooling2d (GlobalMax (None, 2048) 0 block14_sepconv2_act[0][0]
==================================================================================================
Total params: 20,861,480
Trainable params: 20,806,952
Non-trainable params: 54,528
__________________________________________________________________________________________________
Start predict train
import timeit
print("Start predict Validation")
start = timeit.default_timer()
predictions_validation = model_new.predict(validation, batch_size=32)
stop = timeit.default_timer()
predict_time_xc = stop - start
print("Xception Predict Time : {}".format(predict_time_xc))
Start predict Validation
Xception Predict Time : 203.91974259999984
For feature extraction we decided to use an SVM model and logistic regression:
We trained the SVM and logistic regression models on the predictions that the Xception model produced for the training set, a matrix of shape (number of images, 2048).
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
lrc = LogisticRegression(n_jobs=8,solver='lbfgs',multi_class='auto')
svm = SVC(max_iter=8000,gamma='auto')
start = timeit.default_timer()
lrc.fit(predictions_train , np.array(train_lbls))
stop = timeit.default_timer()
fit_time_lrc = stop - start
start = timeit.default_timer()
svm.fit(predictions_train , np.array(train_lbls))
stop = timeit.default_timer()
fit_time_svm = stop - start
print("LRC Model Fit Time : {}".format(fit_time_lrc))
print("SVM Model Fit Time : {}".format(fit_time_svm))
LRC Model Fit Time : 58.63662929999987
SVM Model Fit Time : 200.5441163999999
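Note that `StandardScaler` is imported above but never applied; an RBF SVM in particular is sensitive to feature scale, so standardizing the 2048-dimensional feature matrix before fitting often helps. A minimal sketch on synthetic features (the shapes and random values below are stand-ins for our real prediction matrices):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the (n_images, 2048) Xception feature matrix
rng = np.random.default_rng(0)
features_train = rng.normal(loc=5.0, scale=3.0, size=(200, 2048))
labels_train = rng.integers(0, 5, size=200)

# Fit the scaler on training features only, then reuse it on validation/test
scaler = StandardScaler()
features_train_scaled = scaler.fit_transform(features_train)

# After scaling, each feature column has ~zero mean and unit variance
print(features_train_scaled.mean(), features_train_scaled.std())

# The SVM is then fit on the scaled features instead of the raw ones
svm = SVC(max_iter=8000, gamma='auto')
svm.fit(features_train_scaled, labels_train)
```

The same fitted `scaler` must be applied (via `transform`, not `fit_transform`) to the validation features before calling `predict`, so that both sets share one scale.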
After training, we measure the accuracy on the predictions that the Xception model produced for the validation set, again a matrix of shape (number of images, 2048).
import seaborn as sns
from sklearn import metrics
start = timeit.default_timer()
predictions_lrc = lrc.predict(predictions_validation)
stop = timeit.default_timer()
predict_time_lrc = stop - start
start = timeit.default_timer()
predictions_svm = svm.predict(predictions_validation)
stop = timeit.default_timer()
predict_time_svm = stop - start
print("LRC Model Predict Time : {}".format(predict_time_lrc))
print("SVM Model Predict Time : {}".format(predict_time_svm))
lrc_val_acc = metrics.accuracy_score(validation_lbls,predictions_lrc)
svm_val_acc = metrics.accuracy_score(validation_lbls,predictions_svm)
print("LRC Model Validation Accuracy: {}".format(lrc_val_acc))
print("SVM Model Validation Accuracy: {}".format(svm_val_acc))
LRC Model Predict Time : 0.043863599999895087
SVM Model Predict Time : 47.10599140000022
LRC Model Validation Accuracy: 0.36332518337408315
SVM Model Validation Accuracy: 0.258679706601467
from sklearn.metrics import confusion_matrix
confusion_matrix(validation_lbls,predictions_svm)
array([[ 8, 0, 0, ..., 0, 0, 0],
[ 0, 12, 0, ..., 1, 0, 0],
[ 0, 0, 6, ..., 1, 0, 0],
...,
[ 0, 0, 0, ..., 2, 0, 0],
[ 0, 0, 0, ..., 0, 2, 0],
[ 2, 0, 0, ..., 0, 0, 2]], dtype=int64)
confusion_matrix(validation_lbls,predictions_lrc)
array([[ 3, 0, 0, ..., 0, 0, 3],
[ 0, 11, 0, ..., 0, 0, 0],
[ 1, 0, 9, ..., 1, 0, 0],
...,
[ 0, 0, 1, ..., 3, 0, 0],
[ 0, 0, 0, ..., 0, 8, 0],
[ 0, 0, 0, ..., 0, 0, 5]], dtype=int64)
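Seaborn is imported above but the confusion matrices are printed as raw arrays, which is hard to scan for 120 breeds; a heatmap makes the diagonal (correct predictions) visible at a glance. A small sketch on toy labels (our real `validation_lbls` and `predictions_lrc` arrays would drop in the same way):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen so no display is needed
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.metrics import confusion_matrix

# Toy labels standing in for validation_lbls / predictions_lrc
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
cm = confusion_matrix(y_true, y_pred)

fig, ax = plt.subplots(figsize=(5, 4))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', ax=ax)
ax.set_xlabel('Predicted breed')
ax.set_ylabel('True breed')
fig.savefig('confusion_heatmap.png')
```

For the full 120-class matrix, dropping `annot=True` keeps the plot readable; mass concentrated on the diagonal indicates correct classifications.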
Now we will compare these three models:
xception_input_size = "(224,224,3)"
svm_input_size = "(1,2048)"
lrc_input_size = "(1,2048)"
summariz_df = pd.DataFrame(columns=[ '' , 'Xception Model' , 'SVM Model' , 'LRC Model'])
summariz_df[''] = ['Fit Time Sec' , 'Prediction Time Sec' , 'Input Size' , 'Validation Accuracy']
summariz_df.set_index('' , inplace=True)
summariz_df.loc['Fit Time Sec' , 'Xception Model'] = 5946.257
summariz_df.loc['Prediction Time Sec' , 'Xception Model'] = 203.919
summariz_df.loc['Input Size' , 'Xception Model'] = xception_input_size
summariz_df.loc['Validation Accuracy' , 'Xception Model'] = 0.466
summariz_df.loc['Fit Time Sec' , 'SVM Model'] = 200.544
summariz_df.loc['Prediction Time Sec' , 'SVM Model'] = 47.105
summariz_df.loc['Input Size' , 'SVM Model'] = svm_input_size
summariz_df.loc['Validation Accuracy' , 'SVM Model'] = 0.258
summariz_df.loc['Fit Time Sec' , 'LRC Model'] = 58.636
summariz_df.loc['Prediction Time Sec' , 'LRC Model'] = 0.043
summariz_df.loc['Input Size' , 'LRC Model'] = lrc_input_size
summariz_df.loc['Validation Accuracy' , 'LRC Model'] = 0.363
summariz_df
|  | Xception Model | SVM Model | LRC Model |
|---|---|---|---|
| Fit Time Sec | 5946.26 | 200.544 | 58.636 |
| Prediction Time Sec | 203.919 | 47.105 | 0.043 |
| Input Size | (224,224,3) | (1,2048) | (1,2048) |
| Validation Accuracy | 0.466 | 0.258 | 0.363 |
This comparison shows that the trained Xception model reaches the highest validation accuracy. Logistic regression reaches an accuracy roughly 10 percentage points lower, but with far shorter training and prediction times.
def save_model(model, filename):
    # Helper to save a Keras model's architecture (JSON) and weights (HDF5)
    # into the local 'cache' directory, named after the given filename
    json_string = model.to_json()
    if not os.path.isdir('cache'):
        os.mkdir('cache')
    with open(os.path.join('cache', filename + '.json'), 'w') as f:
        f.write(json_string)
    model.save_weights(os.path.join('cache', filename + '.weights.h5'), overwrite=True)
save_model(first_model, "first")
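A matching loader is a natural counterpart: rebuild the architecture from the saved JSON, then restore the weights. This is a sketch following the `save_model` naming convention above; `load_saved_model` is a name we introduce here, and the tiny demo model makes the round trip self-contained:

```python
import os
import numpy as np
from tensorflow.keras.models import Sequential, model_from_json
from tensorflow.keras.layers import Dense, Input

def load_saved_model(filename):
    # Counterpart to save_model: rebuild the architecture from the saved
    # JSON file, then load the weights from the matching HDF5 file
    with open(os.path.join('cache', filename + '.json')) as f:
        model = model_from_json(f.read())
    model.load_weights(os.path.join('cache', filename + '.weights.h5'))
    return model

# Round-trip demo with a tiny model
demo = Sequential([Input(shape=(4,)), Dense(3, activation='relu')])
os.makedirs('cache', exist_ok=True)
with open('cache/demo.json', 'w') as f:
    f.write(demo.to_json())
demo.save_weights('cache/demo.weights.h5')

restored = load_saved_model('demo')
x = np.ones((1, 4), dtype='float32')
same = bool(np.allclose(demo.predict(x, verbose=0),
                        restored.predict(x, verbose=0)))
print(same)
```

Saving architecture and weights separately like this keeps the JSON human-readable; `model.save(...)` is the one-file alternative.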
In this project we performed multi-class image classification on the dog breed dataset. This was our first time dealing with such a large dataset, with neural networks, and with deep learning models in general. Before building a neural network model we analyzed the dataset. We saw that we have relatively few pictures per dog breed and a large number of different breeds, so we concluded that learning would be slow. The first model we built was a basic model with a small number of layers. We wanted to see how the model performs and whether it learns the training photos specifically. To keep the model simple we used no augmentation and only a few convolution layers. In its results we saw that within a small number of epochs the model overfits, so we realized we needed to regularize it. We did so by adding convolution layers, adding BatchNormalization layers, and increasing the Dropout value. Finally we ran a model with image augmentation. Analyzing its results, we saw that although it takes longer to reach high accuracy values, it learns in a controlled, gradual way on both the training data and the validation data. Judging by the progress of the values across the epochs, we assume that running this model for more epochs would yield higher accuracy.
In those 3 models the number of layers was already large and the models were deeper than the basic one. These models were challenging because their runs took longer, and as a result the code sometimes crashed. We worked on Google Colab with a GPU, and sometimes tried to run the notebook locally via Anaconda. If we were not limited in space and running time, we would train for more epochs and build a deeper model with more layers. In the second part we had to choose a pretrained model from those presented in the lectures, as well as other models we found online, and we chose the Xception model. We removed its last layer and added a layer that adapts the pretrained model to our classification problem. We used pretrained ImageNet weights and resized the images to the input size of the chosen model. We trained the model on our dataset, splitting it so that 80% became training data and 20% became test data. Analyzing the results, we saw that this model reaches much higher values than the models from Part A. We conclude that the depth and complexity of the model help it learn better.
Next, we used the pretrained model as a feature extractor. To do this we removed the last layer and used the output of the model (2048 features per image). Analyzing the data, we saw that this model produces relatively few features compared with other models online, so we expected relatively low values from the feature extractor, and we assumed that extracting a larger number of features would yield higher accuracy. After extracting the features we trained models on our dataset to check whether the feature extractor can be effective. We chose two models for prediction, SVM and logistic regression, measured their accuracy, and compared it with the accuracy of the original pretrained model. We saw that although their accuracy values are lower, prediction is much faster, so there is a tradeoff between time and accuracy.
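The feature-extraction step described above can be sketched as follows. Dropping Xception's top layer and adding global max pooling (as in the model summary earlier, which ends in `global_max_pooling2d`) yields exactly 2048 features per image. `weights=None` here is only to keep the sketch self-contained without downloading anything; the project used `weights='imagenet'`:

```python
import numpy as np
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import GlobalMaxPooling2D
from tensorflow.keras.models import Model

# Xception without its classification head; last conv block emits
# (7, 7, 2048), which global max pooling reduces to a 2048-vector
base = Xception(weights=None, include_top=False, input_shape=(224, 224, 3))
extractor = Model(base.input, GlobalMaxPooling2D()(base.output))

# Two dummy images stand in for a real preprocessed batch
batch = np.zeros((2, 224, 224, 3), dtype='float32')
feats = extractor.predict(batch, verbose=0)
print(feats.shape)  # (2, 2048)
```

Running `extractor.predict` over the whole training and validation sets produces the (number of images, 2048) matrices that the SVM and logistic regression models were fit on.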
In conclusion, this work was challenging, we learned a lot, and we feel we gained the tools and understanding needed to get started with machine learning.